VoRA: Vision as LoRA
#ByteDance introduces #VoRA (Vision as #LoRA) — a novel framework that transforms #LLMs into Multimodal Large Language Models (MLLMs) by integrating vision-specific LoRA layers.
All training data, source code, and model weights are openly available!

Key Resources:
Overview: https://t.ly/guNVN
Paper: arxiv.org/pdf/2503.20680
GitHub Repo: github.com/Hon-Wong/VoRA
Project Page: georgeluimmortal.github.io/vora-homepage.github.io
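
The core mechanism, as a minimal PyTorch sketch (the rank, scaling, and which layers get wrapped are illustrative assumptions here, not VoRA's actual settings; see the repo for the real implementation):

```python
import torch
import torch.nn as nn

class VisionLoRALinear(nn.Module):
    """Frozen LLM linear layer plus a trainable low-rank update
    meant to absorb vision-specific knowledge."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():    # the LLM weights stay frozen
            p.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # start as a zero update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Wrap e.g. an attention projection; inputs are mixed vision+text token embeddings.
layer = VisionLoRALinear(nn.Linear(4096, 4096))
out = layer(torch.randn(1, 10, 4096))
```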

@Machine_learn
Forwarded from Papers
Greetings,
Author positions 4 and 5 on this paper are still open. Anyone interested in collaborating, please get in touch with me.


One of the good tools I have been able to develop is the Stock Ai tool. It uses 360 indicators. Back-test reports for this tool are available in the videos below.

May 2024 :

https://youtu.be/aSS99lynMFQ?si=QSk8VVKhLqO_2Qi3

July 2024:

https://youtu.be/ThyZ0mZwsGk?si=FKPK7Hkz-mRx-752&t=209



@Raminmousa
Llama-Nemotron: Efficient Reasoning Models

📚 Paper

@Machine_learn
Introducing Continuous Thought Machines

📚 Paper

@Machine_learn
NVIDIA just open-sourced Open Code Reasoning models - 32B, 14B, and 7B - Apache 2.0 licensed 🔥

> Beats o3-mini & o1 (low) on LiveCodeBench 😍

Backed by the OCR (Open Code Reasoning) dataset, the models are 30% more token-efficient than other equivalent reasoning models

Works with llama.cpp, vLLM, transformers, TGI and more - check them out today!!

https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B
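
A minimal sketch for trying the 32B checkpoint with transformers (untested; the model card may prescribe a specific chat template or dtype):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenCodeReasoning-Nemotron-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Write a function that checks if a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```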

@Machine_learn
A New Efficient Hybrid Technique for Human Action Recognition Using 2D Conv-RBM and LSTM with Optimized Frame Selection


📕 Paper: https://www.mdpi.com/2227-7080/13/2/53

🔥 Datasets:
KTH: https://www.csc.kth.se/cvap/actions/
UCF Sports: https://www.crcv.ucf.edu/research/data-sets/ucf-sports-action/
HMDB51: https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/

@Machine_learn
Comprehensive Analysis of Random Forest and XGBoost Performance with SMOTE, ADASYN, and GNUS Under Varying Imbalance Levels


📕 Paper: https://www.mdpi.com/2227-7080/13/3/88

🔥 Dataset: https://www.kaggle.com/code/rinichristy/customer-churn-prediction-2020
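
For context, a minimal sketch of the kind of pipeline the paper benchmarks (SMOTE oversampling feeding a Random Forest), run on synthetic stand-in data; the parameters are illustrative, not the paper's setup:

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data (5% positives), standing in for the churn dataset.
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the training split only, so no synthetic points leak into the test set.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print("F1 on held-out data:", f1_score(y_te, clf.predict(X_te)))
```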

@Machine_learn
DeepSeek-Coder

DeepSeek Coder is a series of code language models, each trained from scratch on 2T tokens composed of 87% code and 13% natural language in both English and Chinese. Models are available in sizes from 1B to 33B. Each model is pre-trained on a project-level code corpus with a 16K window and an extra fill-in-the-blank task, to support project-level code completion and infilling. DeepSeek Coder achieves state-of-the-art performance among open-source code models across multiple programming languages and benchmarks.

Creator: Deepseek-AI
Stars ⭐️: 15.6k
Forks: 1.5k

Github Repo:
https://github.com/deepseek-ai/DeepSeek-Coder
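
A minimal completion sketch with the smallest base checkpoint (untested; the repo README documents the recommended prompts and the special fill-in-the-middle tokens):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = "# write a quicksort algorithm in python\ndef quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```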

@Machine_learn
Full PyTorch Implementation of
Compressive Transformer


📚 Link
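
The distinctive step in the Compressive Transformer (Rae et al., 2019) is squeezing the oldest memories down by a compression rate c, e.g. with a strided 1D convolution. A minimal sketch of just that step (shapes and rate are assumptions; the full model is at the link above):

```python
import torch
import torch.nn as nn

class ConvCompression(nn.Module):
    """Compress old memories by rate `c` with a strided 1D convolution."""
    def __init__(self, d_model: int, c: int = 4):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=c, stride=c)

    def forward(self, old_mem: torch.Tensor) -> torch.Tensor:
        # old_mem: (batch, seq, d_model) -> (batch, seq // c, d_model)
        return self.conv(old_mem.transpose(1, 2)).transpose(1, 2)

comp = ConvCompression(d_model=512, c=4)
print(comp(torch.randn(2, 64, 512)).shape)  # torch.Size([2, 16, 512])
```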


@Machine_learn
Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs

🖥 Github: https://github.com/reml-group/deliberation-on-priors

📕 Paper: https://arxiv.org/abs/2505.15210v1

@Machine_learn
Reinforcement Learning: An Overview

📚 Book


@Machine_learn
The 2025 AI Index Report

📚 Read

@Machine_learn
🎓 Advanced Applications of Machine Learning in Bioinformatics

🗓 Publish year: 2025

📎 Study thesis


@Machine_learn
Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning

24 Apr 2025 · Minju Seo, Jinheon Baek, Seongyun Lee, Sung Ju Hwang



Paper: https://arxiv.org/pdf/2504.17192v2.pdf

Code: https://github.com/going-doer/paper2code

@Machine_learn