Data-engineer-handbook
This is a repo with links to everything you'd ever want to learn about data engineering
Creator: DataExpert-io
Stars ⭐️: 24.9k
Forked by: 4.9k
GitHub Repo:
https://github.com/DataExpert-io/data-engineer-handbook
#github
➖➖➖➖➖➖➖➖➖➖➖➖➖➖
@Machine_learn
⛽ VoRA: Vision as LoRA ⛽
#ByteDance introduces #VoRA (Vision as #LoRA) — a novel framework that transforms #LLMs into Multimodal Large Language Models (MLLMs) by integrating vision-specific LoRA layers.
All training data, source code, and model weights are openly available!
Key Resources:
Overview: https://t.ly/guNVN
Paper: arxiv.org/pdf/2503.20680
GitHub Repo: github.com/Hon-Wong/VoRA
Project Page: georgeluimmortal.github.io/vora-homepage.github.io
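The core idea — letting the LLM itself consume vision patches through low-rank adapters instead of a separate vision encoder — can be illustrated with a minimal PyTorch sketch. The class name, rank, and masking scheme below are assumptions for illustration, not the actual VoRA implementation (see the repo above for that).

```python
# Minimal sketch of the "vision as LoRA" idea, assuming a decoder-only LLM
# with nn.Linear projections; names and shapes are illustrative only.
import torch
import torch.nn as nn

class VisionLoRALinear(nn.Module):
    """Wraps a frozen base projection with a low-rank update applied
    only to vision-token positions (hypothetical design)."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep the LLM weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor, vision_mask: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        delta = self.lora_b(self.lora_a(x)) * self.scale
        # add the low-rank vision update only where vision_mask is True
        return out + delta * vision_mask.unsqueeze(-1)

# toy usage: 2 text tokens followed by 3 vision-patch tokens
proj = VisionLoRALinear(nn.Linear(512, 512), rank=8)
hidden = torch.randn(1, 5, 512)
mask = torch.tensor([[False, False, True, True, True]])
print(proj(hidden, mask).shape)  # torch.Size([1, 5, 512])
```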
@Machine_learn
Forwarded from Papers
Greetings.
Author slots 4 and 5 on this paper are still available; anyone interested in collaborating, please contact me.
One of the better tools I have been able to develop is the Stock Ai tool, which uses 360 indicators. Backtest reports for this tool are available in the videos below.
May 2024:
https://youtu.be/aSS99lynMFQ?si=QSk8VVKhLqO_2Qi3
July 2024:
https://youtu.be/ThyZ0mZwsGk?si=FKPK7Hkz-mRx-752&t=209
@Raminmousa
NVIDIA just open-sourced Open Code Reasoning models - 32B, 14B and 7B - Apache 2.0 licensed 🔥
> Beats O3 mini & O1 (low) on LiveCodeBench 😍
Backed by the OCR dataset, the models are 30% more token-efficient than other equivalent reasoning models
Works with llama.cpp, vLLM, transformers, TGI and more - check them out today!!
https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B
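For a quick local test with transformers, a minimal sketch might look like the following; the prompt and generation settings are assumptions, so check the model card linked above for the recommended chat template and sampling parameters.

```python
# Hedged sketch: load the 32B checkpoint with Hugging Face transformers
# and ask for a short code-reasoning completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenCodeReasoning-Nemotron-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user",
             "content": "Write a Python function that checks if a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```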
@Machine_learn
A New Efficient Hybrid Technique for Human Action Recognition Using 2D Conv-RBM and LSTM with Optimized Frame Selection
📕 Paper: https://www.mdpi.com/2227-7080/13/2/53
🔥 Datasets:
KTH: https://www.csc.kth.se/cvap/actions/
UCF Sports: https://www.crcv.ucf.edu/research/data-sets/ucf-sports-action/
HMDB51: https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/
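The overall pipeline shape (frame selection → per-frame spatial features → a recurrent model over time) can be sketched as below. A plain Conv2d stack stands in for the paper's 2D Conv-RBM feature extractor, and the frame-difference selection rule is an assumption, not the paper's optimized method.

```python
# Minimal PyTorch sketch: motion-based frame selection + CNN features + LSTM.
import torch
import torch.nn as nn

def select_frames(video: torch.Tensor, k: int = 16) -> torch.Tensor:
    """video: (T, C, H, W). Keep the k frames with the largest change
    from their predecessor (simple motion heuristic, assumed)."""
    diffs = (video[1:] - video[:-1]).abs().flatten(1).mean(dim=1)
    idx = torch.topk(diffs, k=min(k, diffs.numel())).indices.sort().values + 1
    return video[idx]

class ConvLSTMActionNet(nn.Module):
    def __init__(self, num_classes: int = 6, hidden: int = 128):
        super().__init__()
        self.spatial = nn.Sequential(                 # stand-in for 2D Conv-RBM
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, C, H, W) -> per-frame features -> temporal model
        b, t = frames.shape[:2]
        feats = self.spatial(frames.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])

video = torch.randn(40, 3, 112, 112)             # one clip, 40 frames
clip = select_frames(video, k=16).unsqueeze(0)   # (1, 16, 3, 112, 112)
print(ConvLSTMActionNet()(clip).shape)           # torch.Size([1, 6])
```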
@Machine_learn
Comprehensive Analysis of Random Forest and XGBoost Performance with SMOTE, ADASYN, and GNUS Under Varying Imbalance Levels.
📕 Paper: https://www.mdpi.com/2227-7080/13/3/88
🔥 Dataset: https://www.kaggle.com/code/rinichristy/customer-churn-prediction-2020
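The comparison setup is straightforward to reproduce in outline: oversample inside a pipeline, then fit both classifiers. The sketch below uses a synthetic imbalanced dataset in place of the churn data linked above, and the metrics and imbalance ratio are illustrative, not the paper's experimental protocol.

```python
# Hedged sketch: SMOTE/ADASYN oversampling + Random Forest vs. XGBoost.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE, ADASYN
from imblearn.pipeline import Pipeline
from xgboost import XGBClassifier

# synthetic 95/5 imbalanced binary problem as a stand-in dataset
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for samp_name, sampler in [("SMOTE", SMOTE(random_state=0)),
                           ("ADASYN", ADASYN(random_state=0))]:
    for clf_name, clf in [("RandomForest", RandomForestClassifier(random_state=0)),
                          ("XGBoost", XGBClassifier(eval_metric="logloss", random_state=0))]:
        pipe = Pipeline([("oversample", sampler), ("clf", clf)])
        pipe.fit(X_tr, y_tr)
        print(f"{samp_name:6s} + {clf_name:12s} F1 = {f1_score(y_te, pipe.predict(X_te)):.3f}")
```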
@Machine_learn
Exercises in Machine Learning
Download, read, and practice:
arxiv.org/pdf/2206.13446
GitHub Repo: https://github.com/michaelgutmann/ml-pen-and-paper-exercises
@Machine_learn
DeepSeek-Coder
DeepSeek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on project-level code corpus by employing a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, DeepSeek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
Creator: Deepseek-AI
Stars ⭐️: 15.6k
Forked by: 1.5k
GitHub Repo:
https://github.com/deepseek-ai/DeepSeek-Coder
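A minimal way to try plain code completion with the smallest checkpoint via transformers is sketched below. The model id and generation settings are assumptions for illustration; see the repo README above for the exact prompt format used for the fill-in-the-blank (infilling) task.

```python
# Hedged sketch: code completion with a small DeepSeek-Coder base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-1.3b-base"  # assumed smallest variant
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True,
                                             torch_dtype="auto", device_map="auto")

prompt = "# Write a quick sort implementation in Python\ndef quick_sort(arr):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```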
@Machine_learn
probability_cheatsheet.pdf
789.3 KB
Probability Cheatsheet
@Machine_learn
Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs
🖥 Github: https://github.com/reml-group/deliberation-on-priors
📕 Paper: https://arxiv.org/abs/2505.15210v1
@Machine_learn
🎓Advanced Applications of Machine Learning in Bioinformatics
🗓Publish year: 2025
📎 Study thesis
@Machine_learn
Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning
24 Apr 2025 · Minju Seo, Jinheon Baek, Seongyun Lee, Sung Ju Hwang
Paper: https://arxiv.org/pdf/2504.17192v2.pdf
Code: https://github.com/going-doer/paper2code
@Machine_learn