📃 Deep learning in microbiome analysis: a comprehensive review of neural network models
📎 Study the paper
@Machine_learn
📃 A Survey of Deep Learning Methods in Protein Bioinformatics and its Impact on Protein Design
📎 Study the paper
@Machine_learn
Eid Mubarak! May every year find you in the very best of health and blessings, God willing.
I ask God to let you witness Ramadan again for many years to come, and to accept our righteous deeds and yours. 🖤
@Machine_learn
⚡️ LLM4Decompile
🟡 Github
🟡 Models
🟡 Paper
🟡 Colab
@Machine_learn
git clone https://github.com/albertan017/LLM4Decompile.git
cd LLM4Decompile
conda create -n 'llm4decompile' python=3.9 -y
conda activate llm4decompile
pip install -r requirements.txt
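With the environment ready, the released checkpoints can be driven through the standard Hugging Face causal-LM interface. The sketch below is a minimal, unofficial example: the checkpoint id and the prompt wording are assumptions based on the project's release pattern, so check the repo's README for the officially supported names and prompt format.

# Minimal usage sketch (assumptions: checkpoint id and prompt format may differ
# from the official README -- verify against the LLM4Decompile repo before use).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM4Binary/llm4decompile-1.3b-v1.5"  # hypothetical checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assembly of a compiled function, e.g. produced with: gcc -O0 -S func.c -o func.s
asm_code = open("func.s").read()
prompt = f"# This is the assembly code:\n{asm_code}\n# What is the source code?\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens (the reconstructed C source).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))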
@Machine_learn
Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement
🖥 Github: https://github.com/yunncheng/MMRL
📕 Paper: https://arxiv.org/abs/2503.08497v1
🌟 Dataset: https://paperswithcode.com/dataset/imagenet-s
@Machine_learn
Greetings,
Following up on our research work, we plan to write a review article in the field of pathology. Colleagues who are interested can join as the 2nd and 3rd authors on this topic.
✅ Start date: Farvardin 20.
Journal: Scientific Reports https://www.nature.com/srep/
🔥 🔥 🔥 🔥
Price:
2nd author: 25 million
3rd author: 20 million
I will personally help with the full details and with how to write each section.
@Raminmousa
@Machine_learn
@Paper4money
InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity
20 Mar 2025 · Liming Jiang, Qing Yan, Yumin Jia, Zichuan Liu, Hao Kang, Xin Lu
Achieving flexible and high-fidelity identity-preserved image generation remains formidable, particularly with advanced Diffusion Transformers (DiTs) like FLUX. We introduce InfiniteYou (InfU), one of the earliest robust frameworks leveraging DiTs for this task. InfU addresses significant issues of existing methods, such as insufficient identity similarity, poor text-image alignment, and low generation quality and aesthetics. Central to InfU is InfuseNet, a component that injects identity features into the DiT base model via residual connections, enhancing identity similarity while maintaining generation capabilities. A multi-stage training strategy, including pretraining and supervised fine-tuning (SFT) with synthetic single-person-multiple-sample (SPMS) data, further improves text-image alignment, ameliorates image quality, and alleviates face copy-pasting. Extensive experiments demonstrate that InfU achieves state-of-the-art performance, surpassing existing baselines. In addition, the plug-and-play design of InfU ensures compatibility with various existing methods, offering a valuable contribution to the broader community.
Paper: https://arxiv.org/pdf/2503.16418v1.pdf
Code: https://github.com/bytedance/infiniteyou
Dataset: 10,000 People - Human Pose Recognition Data
@Machine_learn
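The central mechanism described above, injecting identity features into a frozen DiT backbone through residual connections, can be pictured with a small toy module. This is an illustrative sketch only: the class name, projection layers, and tensor shapes are assumptions for explanation, not the authors' InfuseNet (see the linked bytedance/infiniteyou repo for the real implementation).

# Illustrative sketch of residual identity injection into a DiT block's hidden states.
# All names and shapes are assumptions; the actual InfuseNet differs.
import torch
import torch.nn as nn

class IdentityInjector(nn.Module):
    def __init__(self, id_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(id_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Zero-init the last layer so the injection starts as a no-op and the
        # frozen base model's generation capability is preserved early in training.
        nn.init.zeros_(self.proj[-1].weight)
        nn.init.zeros_(self.proj[-1].bias)

    def forward(self, hidden: torch.Tensor, id_embed: torch.Tensor) -> torch.Tensor:
        # hidden: (B, N_tokens, hidden_dim); id_embed: (B, id_dim)
        residual = self.proj(id_embed).unsqueeze(1)  # (B, 1, hidden_dim)
        return hidden + residual                     # broadcast over all tokens

# Toy usage: add a face-recognition embedding to one block's hidden states.
injector = IdentityInjector(id_dim=512, hidden_dim=1024)
hidden_states = torch.randn(2, 256, 1024)   # hidden states from one DiT block
face_embedding = torch.randn(2, 512)        # e.g. from an off-the-shelf face encoder
print(injector(hidden_states, face_embedding).shape)  # torch.Size([2, 256, 1024])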
📃 A Comprehensive Guide to Validating Bioinformatics Findings: From In Silico to In Vitro
📎 Study the paper
@Machine_learn
LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds
Animatable 3D human reconstruction from a single image is a challenging problem due to the ambiguity in decoupling geometry, appearance, and deformation. Recent advances in 3D human reconstruction mainly focus on static human modeling, and the reliance of using synthetic 3D scans for training limits their generalization ability. Conversely, optimization-based video methods achieve higher fidelity but demand controlled capture conditions and computationally intensive refinement processes. Motivated by the emergence of large reconstruction models for efficient static reconstruction, we propose LHM (Large Animatable Human Reconstruction Model) to infer high-fidelity avatars represented as 3D Gaussian splatting in a feed-forward pass. Our model leverages a multimodal transformer architecture to effectively encode the human body positional features and image features with attention mechanism, enabling detailed preservation of clothing geometry and texture. To further boost the face identity preservation and fine detail recovery, we propose a head feature pyramid encoding scheme to aggregate multi-scale features of the head regions. Extensive experiments demonstrate that our LHM generates plausible animatable human in seconds without post-processing for face and hands, outperforming existing methods in both reconstruction accuracy and generalization ability.
Paper: https://arxiv.org/pdf/2503.10625v1.pdf
Code: https://github.com/aigc3d/LHM
@Machine_learn
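The head feature pyramid encoding mentioned in the abstract, aggregating multi-scale features of the head region before they enter the multimodal transformer, can be sketched roughly as below. The backbone stages, scales, and token dimension are illustrative assumptions, not the authors' code; the real model lives in the linked aigc3d/LHM repository.

# Rough sketch of multi-scale head-feature aggregation over a cropped head image.
# Layer choices and shapes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeadFeaturePyramid(nn.Module):
    def __init__(self, out_dim: int = 512):
        super().__init__()
        # Three conv stages give features at 1/2, 1/4, and 1/8 of the input resolution.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.GELU())
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.GELU())
        self.stage3 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.GELU())
        # Project every scale to a common channel width before fusion.
        self.proj = nn.ModuleList([nn.Conv2d(c, out_dim, 1) for c in (64, 128, 256)])

    def forward(self, head_crop: torch.Tensor) -> torch.Tensor:
        # head_crop: (B, 3, H, W) cropped head region of the input image
        f1 = self.stage1(head_crop)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        target = f3.shape[-2:]
        # Resize every scale to the coarsest resolution and sum (pyramid fusion).
        fused = sum(
            F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, (f1, f2, f3))
        )
        # Flatten to tokens so they can be concatenated with body/image tokens
        # and fed to the multimodal transformer.
        return fused.flatten(2).transpose(1, 2)  # (B, Hc*Wc, out_dim)

tokens = HeadFeaturePyramid()(torch.randn(1, 3, 128, 128))
print(tokens.shape)  # torch.Size([1, 256, 512])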
Harnessing the Reasoning Economy: A Survey of Efficient Reasoning for Large Language Models
🖥 Github: https://github.com/devoallen/awesome-reasoning-economy-papers
📕 Paper: https://arxiv.org/abs/2503.24377v1
@Machine_learn
Forwarded from Github LLMs
📄Multimodal deep learning approaches for precision oncology: a comprehensive review
📎 Study the paper
@Machine_learn