Llama-Nemotron: Efficient Reasoning Models

📚 Paper

@Machine_learn
Introducing Continuous Thought Machines

📚 Paper

@Machine_learn
NVIDIA just open-sourced Open Code Reasoning models - 32B, 14B, and 7B - Apache 2.0 licensed 🔥

> Beats o3-mini & o1 (low) on LiveCodeBench 😍

Backed by the OCR (Open Code Reasoning) dataset, the models are 30% more token-efficient than other equivalent reasoning models.

Works with llama.cpp, vLLM, transformers, TGI, and more - check them out today!

https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B
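
Below is a minimal sketch of loading the 32B checkpoint with transformers. The model ID comes from the link above; the chat template, bf16 dtype, device map, and generation settings are illustrative assumptions, not official recommendations:

```python
# Minimal sketch: OpenCodeReasoning-Nemotron-32B via Hugging Face transformers.
# Assumptions: the checkpoint ships a chat template; bf16 + device_map="auto"
# and the generation settings are illustrative, not official recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenCodeReasoning-Nemotron-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```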

@Machine_learn
A New Efficient Hybrid Technique for Human Action Recognition Using 2D Conv-RBM and LSTM with Optimized Frame Selection


📕 Paper: https://www.mdpi.com/2227-7080/13/2/53

🔥 Datasets:
KTH: https://www.csc.kth.se/cvap/actions/
UCF Sports: https://www.crcv.ucf.edu/research/data-sets/ucf-sports-action/
HMDB51: https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/
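
A 2D Conv-RBM has no stock PyTorch layer, so the sketch below substitutes a plain CNN feature extractor just to show the overall pattern the paper builds on (selected frames -> per-frame 2D features -> LSTM -> action logits); layer shapes and sizes are illustrative assumptions, not the paper's configuration:

```python
# Sketch of the general pipeline (per-frame 2D features -> LSTM -> classifier).
# The paper uses a 2D Conv-RBM for feature extraction; a plain CNN stands in
# here, so this shows the overall pattern, not the paper's exact model.
import torch
import torch.nn as nn

class ConvLSTMActionClassifier(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        self.features = nn.Sequential(              # stand-in for the 2D Conv-RBM
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B*T, 64)
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width) of selected frames
        b, t, c, h, w = clips.shape
        f = self.features(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(f)
        return self.head(h_n[-1])                   # logits per action class

model = ConvLSTMActionClassifier(num_classes=6)     # e.g. the 6 KTH actions
logits = model(torch.randn(2, 16, 3, 112, 112))     # 2 clips of 16 frames each
```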

@Machine_learn
Comprehensive Analysis of Random Forest and XGBoost Performance with SMOTE, ADASYN, and GNUS Under Varying Imbalance Levels


📕 Paper: https://www.mdpi.com/2227-7080/13/3/88

🔥 Dataset: https://www.kaggle.com/code/rinichristy/customer-churn-prediction-2020
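
As a rough illustration of the comparison setup, here is a minimal sketch with imbalanced-learn, which ships SMOTE and ADASYN (GNUS has no stock implementation, so it is omitted); the synthetic data stands in for the churn dataset above:

```python
# Minimal sketch: Random Forest with SMOTE vs. ADASYN oversampling, using
# imbalanced-learn pipelines so oversampling only touches training folds.
# Synthetic 9:1 imbalanced data stands in for the churn dataset.
from imblearn.over_sampling import SMOTE, ADASYN
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

for name, sampler in [("SMOTE", SMOTE(random_state=0)),
                      ("ADASYN", ADASYN(random_state=0))]:
    pipe = Pipeline([("oversample", sampler),
                     ("clf", RandomForestClassifier(random_state=0))])
    scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```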

@Machine_learn
Brownian Motion

📚 Book
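
As a quick companion to the book's central object: a standard Brownian motion has independent Gaussian increments with variance equal to the time step, which is all a simulation needs. A minimal NumPy sketch with illustrative parameters:

```python
# Simulating standard Brownian motion on [0, T]: sum independent Gaussian
# increments with variance dt (hence std sqrt(dt)); parameters are illustrative.
import numpy as np

T, n, paths = 1.0, 1_000, 10_000
dt = T / n
rng = np.random.default_rng(0)

increments = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.concatenate([np.zeros((paths, 1)), increments.cumsum(axis=1)], axis=1)

print(W[:, -1].var())  # sanity check: Var[W_T] = T = 1.0 (up to sampling noise)
```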

@Machine_learn
DeepSeek-Coder

DeepSeek Coder is a series of code language models, each trained from scratch on 2T tokens composed of 87% code and 13% natural language in both English and Chinese. The models come in sizes ranging from 1B to 33B. Each model is pre-trained on a project-level code corpus with a 16K window and an extra fill-in-the-blank objective to support project-level code completion and infilling. DeepSeek Coder achieves state-of-the-art performance among open-source code models across multiple programming languages and benchmarks.

Creator: DeepSeek-AI
Stars ⭐️: 15.6k
Forks: 1.5k

GitHub Repo:
https://github.com/deepseek-ai/DeepSeek-Coder
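
A minimal completion sketch with transformers follows; the 1.3B base model ID is an assumption drawn from the 1B-33B family named above, and the settings are illustrative (see the repo's README for official usage):

```python
# Minimal sketch: code completion with a small DeepSeek-Coder checkpoint.
# The 1.3B base model ID is an assumption drawn from the 1B-33B family;
# settings are illustrative, see the repo's README for official usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto",
    trust_remote_code=True,
)

prompt = "# write a quicksort function in python\ndef quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```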

@Machine_learn
Full PyTorch Implementation of the Compressive Transformer


📚 Link
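
The model's core trick is to compress evicted memories by a ratio c instead of discarding them. Below is a sketch of one compression function the paper discusses (a strided 1D convolution), with illustrative sizes; it is not taken from the linked implementation:

```python
# Sketch of the Compressive Transformer's memory step: instead of discarding
# the oldest cached activations, compress them by a ratio c (here a strided
# 1D convolution, one of the compression functions discussed in the paper)
# and append the result to a secondary compressed memory.
import torch
import torch.nn as nn

class ConvCompression(nn.Module):
    def __init__(self, d_model: int, ratio: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=ratio, stride=ratio)

    def forward(self, old_mem: torch.Tensor) -> torch.Tensor:
        # old_mem: (seq, batch, d_model) -> (seq // ratio, batch, d_model)
        x = old_mem.permute(1, 2, 0)          # (batch, d_model, seq)
        return self.conv(x).permute(2, 0, 1)  # back to (seq', batch, d_model)

compress = ConvCompression(d_model=512, ratio=3)
old_mem = torch.randn(6, 2, 512)              # 6 evicted memory slots
print(compress(old_mem).shape)                # torch.Size([2, 2, 512])
```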


@Machine_learn
Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs

🖥 GitHub: https://github.com/reml-group/deliberation-on-priors

📕 Paper: https://arxiv.org/abs/2505.15210v1

@Machine_learn
Reinforcement Learning: An Overview

📚 Book


@Machine_learn
The 2025 AI Index Report

📚 Read

@Machine_learn
🎓 Advanced Applications of Machine Learning in Bioinformatics



🗓 Publication year: 2025

📎 Read the thesis


@Machine_learn
Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning

24 Apr 2025 · Minju Seo, Jinheon Baek, Seongyun Lee, Sung Ju Hwang



Paper: https://arxiv.org/pdf/2504.17192v2.pdf

Code: https://github.com/going-doer/paper2code

@Machine_learn
The Way of Code: The Timeless Art of Vibe Coding

📚 Link


@Machine_learn
EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video

📚 Paper

@Machine_learn
TabSTAR: A Foundation Tabular Model With Semantically Target-Aware Representations


📚 Paper

@Machine_learn
System Card: Claude Opus 4 & Claude Sonnet 4

📚 Book


@Machine_learn