πŸ“Œ Linear Attention Is All You Need

πŸ—‚ Category: LARGE LANGUAGE MODELS

πŸ•’ Date: 2024-06-02 | ⏱️ Read time: 10 min read

Self-attention at a fraction of the cost?
πŸ“Œ Measuring The Intrinsic Causal Influence Of Your Marketing Campaigns

πŸ—‚ Category: DATA SCIENCE

πŸ•’ Date: 2024-06-02 | ⏱️ Read time: 11 min read

Causal AI, exploring the integration of causal reasoning into machine learning
πŸ“Œ Comparing Country Sizes with GeoPandas

πŸ—‚ Category:

πŸ•’ Date: 2024-06-02 | ⏱️ Read time: 14 min read

How to project, shift, and rotate geospatial data
πŸ“Œ How I Use ChatGPT As A Data Scientist

πŸ—‚ Category: ARTIFICIAL INTELLIGENCE

πŸ•’ Date: 2024-06-02 | ⏱️ Read time: 8 min read

How ChatGPT improved my productivity as a data scientist
πŸ“Œ PRISM-Rules in Python

πŸ—‚ Category: DATA SCIENCE

πŸ•’ Date: 2024-06-02 | ⏱️ Read time: 14 min read

A simple Python rule-induction system
πŸ“Œ Performance Insights from Sigma Rule Detections in Spark Streaming

πŸ—‚ Category: CYBERSECURITY

πŸ•’ Date: 2024-06-01 | ⏱️ Read time: 13 min read

Utilizing Sigma rules for anomaly detection in cybersecurity logs: A study on performance optimization
πŸ“Œ Why You Don’t Need JS to Make 3D plots

πŸ—‚ Category: DATA SCIENCE

πŸ•’ Date: 2024-06-01 | ⏱️ Read time: 6 min read

Visualizing crime geodata in Python
πŸ“Œ AI Use Cases are Fundamentally Different

πŸ—‚ Category: ROBOTICS

πŸ•’ Date: 2024-05-31 | ⏱️ Read time: 9 min read

How to find unique use cases for AI and places where moderate AI performance is…
πŸ“Œ YOLO – Intuitively and Exhaustively Explained

πŸ—‚ Category: MACHINE LEARNING

πŸ•’ Date: 2024-05-31 | ⏱️ Read time: 31 min read

The genesis of the most widely used object detection models.
πŸ“Œ A Deep Dive into In-Context Learning

πŸ—‚ Category: NATURAL LANGUAGE PROCESSING

πŸ•’ Date: 2024-05-31 | ⏱️ Read time: 11 min read

Stepping out of the β€œcomfort zone” – part 2/3 of a deep-dive into domain adaptation…
πŸ“Œ Deep Dive into Anthropic’s Sparse Autoencoders by Hand

πŸ—‚ Category: LARGE LANGUAGE MODELS

πŸ•’ Date: 2024-05-31 | ⏱️ Read time: 12 min read

Explore the concepts behind the interpretability quest for LLMs
πŸ“Œ On-Device Machine Learning in Spatial Computing

πŸ—‚ Category: MACHINE LEARNING

πŸ•’ Date: 2025-02-17 | ⏱️ Read time: 18 min read

The landscape of computing is undergoing a profound transformation with the emergence of spatial computing…
πŸ“Œ Roadmap to Becoming a Data Scientist, Part 4: Advanced Machine Learning

πŸ—‚ Category: DATA SCIENCE

πŸ•’ Date: 2025-02-14 | ⏱️ Read time: 15 min read

Introduction Data science is undoubtedly one of the most fascinating fields today. Following significant breakthroughs in…
πŸ“Œ Building a Data Engineering Center of Excellence

πŸ—‚ Category: DATA ENGINEERING

πŸ•’ Date: 2025-02-13 | ⏱️ Read time: 11 min read

As data continues to grow in importance and become more complex, the need for skilled…
πŸ€–πŸ§  NanoChat: The Best ChatGPT That $100 Can Buy

πŸ—“οΈ 20 Oct 2025
πŸ“š AI News & Trends

In a world dominated by billion-dollar AI models like GPT-4 and Claude 3, it’s refreshing to see a minimalist, open-source alternative that puts the power of Large Language Models (LLMs) back into the hands of hackers, researchers and enthusiasts. Enter NanoChat – an end-to-end, full-stack implementation of a ChatGPT-style AI chatbot developed by Andrej Karpathy, ...

#NanoChat #ChatGPT #AI #LargeLanguageModels #OpenSource #AndrejKarpathy
πŸ€–πŸ§  PaddleOCR-VL: Redefining Multilingual Document Parsing with a 0.9B Vision-Language Model

πŸ—“οΈ 20 Oct 2025
πŸ“š AI News & Trends

In an era where information is predominantly digital, the ability to extract, interpret and organize data from documents is crucial. From invoices and research papers to multilingual contracts and handwritten notes, document parsing stands at the intersection of vision and language. Traditional Optical Character Recognition (OCR) systems have made impressive strides but they often fall ...

#PaddleOCR-VL #Multilingual #DocumentParsing #VisionLanguageModel #OCR #AI
πŸ€–πŸ§  Top 30 More Retro Bollywood Diwali Portrait Prompts for Women Using Gemini AI – Part 2

πŸ—“οΈ 20 Oct 2025
πŸ“š AI News & Trends

The Diwali celebrations continue and so does the nostalgia! After the huge buzz around our Top 20 Retro Bollywood Diwali Portrait Ideas, we’re back with Part 2 featuring prompts 21 to 50 curated to help you create even more magical, cinematic AI portraits using Google Gemini AI. If you loved the 90s-style Diwali aesthetics shimmering ...
Here is a trick that speeds up CPU-to-GPU data transfer by roughly 4x when training a neural network.

Let's consider an image classification task.

We define the model, load and transform the data.

In the training loop, we transfer data to the GPU and train the network.

What's the problem:

If you look at a profiler trace,

- most of the time goes to the kernels (i.e., the training itself),
- but a noticeable share is also spent transferring data from CPU to GPU (cudaMemcpyAsync).

This can be easily reduced.

Initially, the dataset consists of pixels stored as 8-bit integers. We convert them to 32-bit floats
and then send these float tensors to the GPU. As a result, the data becomes 4 times larger, making the transfer heavier.
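To see the 4x in concrete terms, compare the byte sizes of a uint8 batch and its float32 counterpart (a minimal PyTorch sketch; the batch shape is made up for illustration):

```python
import torch

# A hypothetical batch of 64 RGB images at 224x224, stored as 8-bit ints.
batch_u8 = torch.zeros(64, 3, 224, 224, dtype=torch.uint8)
batch_f32 = batch_u8.float()  # the same pixels as 32-bit floats

bytes_u8 = batch_u8.element_size() * batch_u8.nelement()    # 1 byte per pixel
bytes_f32 = batch_f32.element_size() * batch_f32.nelement()  # 4 bytes per pixel

print(bytes_u8, bytes_f32, bytes_f32 // bytes_u8)  # the ratio is 4
```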

The solution:

Move the conversion step to after the transfer. That is, first transfer the 8-bit ints, and then convert them to floats on the GPU.

As a result, the data transfer step speeds up significantly.
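In PyTorch terms, the change is small. Here is a minimal sketch of the two variants (the function names and the /255 normalization are illustrative assumptions, not from the original post); on a machine without CUDA the device simply falls back to CPU:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def to_gpu_slow(images_u8: torch.Tensor) -> torch.Tensor:
    # Convert to float32 on the CPU, then copy 4x the bytes to the device.
    images_f32 = images_u8.float().div_(255)
    return images_f32.to(device, non_blocking=True)

def to_gpu_fast(images_u8: torch.Tensor) -> torch.Tensor:
    # Copy the compact uint8 tensor first (1 byte per pixel),
    # then do the float conversion on the device.
    images_dev = images_u8.to(device, non_blocking=True)
    return images_dev.float().div_(255)
```

Both functions produce the same float tensor; only the amount of data crossing the CPU-GPU boundary differs. With pinned host memory (pin_memory=True in the DataLoader), the non_blocking copy can additionally overlap with compute.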

Of course, this doesn't work everywhere; for example, in NLP we initially deal with float embeddings.
But in cases where it applies, the speedup is very noticeable.

πŸ‘‰  @DataScienceM