
Taichi Blogs

GPU-Accelerated Collision Detection and Taichi DEM Optimization Challenge
December 22, 2022 | Yuanming Hu, Qian Bao
Numerical simulation and computer graphics usually involve collision detection among a massive number of particles (in many cases, millions). Routine operations, such as particle movement and boundary handling, can be completed in O(N) time (N being the number of particles), but the complexity of collision detection easily escalates to O(N^2) if no optimization is made, imposing an algorithmic bottleneck. A commonly used remedy is grid-based neighborhood search: by confining the search for collision-prone particles to a small area, we reduce the computational complexity of collision detection back to O(N). This article takes a minimal 2D discrete element method (DEM) solver as an example and presents a highly efficient implementation of neighborhood search using Taichi's data structures.
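To give a flavor of the technique before you dive into the full article, here is a minimal, illustrative Taichi sketch of grid-based neighborhood search (not the article's actual solver): particle positions are binned into a uniform grid, and each particle only tests the 3x3 block of cells around it. The particle count, grid resolution, contact radius, and per-cell capacity below are arbitrary values chosen for demonstration.

```python
import taichi as ti

ti.init(arch=ti.gpu)  # falls back to CPU if no GPU is available

# Illustrative sizes, not taken from the article
N = 8192                  # number of particles
grid_n = 64               # grid cells per axis; the domain is the unit square
cell_size = 1.0 / grid_n
radius = 0.5 * cell_size  # contact radius, smaller than one cell
max_per_cell = 64         # capacity of each cell's particle list

pos = ti.Vector.field(2, dtype=ti.f32, shape=N)
cell_count = ti.field(dtype=ti.i32, shape=(grid_n, grid_n))
cell_list = ti.field(dtype=ti.i32, shape=(grid_n, grid_n, max_per_cell))
num_contacts = ti.field(dtype=ti.i32, shape=N)

@ti.kernel
def init():
    for p in pos:
        pos[p] = ti.Vector([ti.random(), ti.random()])

@ti.kernel
def build_grid():
    for i, j in cell_count:
        cell_count[i, j] = 0
    for p in pos:
        # Positions are non-negative, so casting truncates like floor()
        ci = int(pos[p][0] / cell_size)
        cj = int(pos[p][1] / cell_size)
        slot = ti.atomic_add(cell_count[ci, cj], 1)
        if slot < max_per_cell:
            cell_list[ci, cj, slot] = p

@ti.kernel
def count_contacts():
    for p in pos:
        ci = int(pos[p][0] / cell_size)
        cj = int(pos[p][1] / cell_size)
        n = 0
        # Only the 3x3 block of cells around p can hold collision candidates
        for di in ti.static(range(-1, 2)):
            for dj in ti.static(range(-1, 2)):
                ni, nj = ci + di, cj + dj
                if 0 <= ni and ni < grid_n and 0 <= nj and nj < grid_n:
                    for k in range(ti.min(cell_count[ni, nj], max_per_cell)):
                        q = cell_list[ni, nj, k]
                        if q != p and (pos[p] - pos[q]).norm() < radius:
                            n += 1
        num_contacts[p] = n

init()
build_grid()
count_contacts()
```

Because every particle inspects only a bounded number of candidates in its neighboring cells, the total cost of the search stays O(N).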
Read more
Pythonic Supercomputing: Scaling Taichi Programs with MPI4Py
December 7, 2022 | Haidong Lan
At GTC (GPU Technology Conference) 2017 in Beijing, Nvidia unveiled its Tesla V100 GPU accelerator, which has since become a must-have model for deep learning. It was on the same occasion that Jensen Huang, Nvidia's CEO, solemnly gave us the most sincere advice, which kept resonating in our heads for years to come:
Read more
Taichi's Quantized Data Types: Same Computational Code, Optimized GPU Memory Usage
November 18, 2022 | Yi Xu
Starting from v1.1.0, Taichi provides quantized data types. But why is quantization important, especially in scenarios where Taichi stands out, such as physical simulation? This blog demonstrates how this new feature reduces your GPU memory usage significantly and requires zero change to your computational code.
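As a rough illustration of the workflow (a sketch assuming the ti.types.quant and ti.BitpackedFields APIs introduced around v1.1; the field names and bit widths are made up for demonstration), several narrow integer fields can share one machine word while the kernels that use them stay untouched:

```python
import taichi as ti

ti.init(arch=ti.cpu)  # quantized types also run on CUDA, where the memory savings matter most

N = 1_000_000  # illustrative element count

# A custom 10-bit unsigned integer type; the width is an arbitrary example
qu10 = ti.types.quant.int(bits=10, signed=False)

a = ti.field(dtype=qu10)
b = ti.field(dtype=qu10)
c = ti.field(dtype=qu10)

# Pack the three 10-bit fields into a single 32-bit word per element
pack = ti.BitpackedFields(max_num_bits=32)
pack.place(a, b, c)
ti.root.dense(ti.i, N).place(pack)

@ti.kernel
def fill():
    # Reads and writes look exactly like ordinary fields,
    # so the computational code stays unchanged
    for i in range(N):
        a[i] = i % 1024
        b[i] = (i * 7) % 1024
        c[i] = (i * 13) % 1024

fill()
```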
Read more
How Taichi Fuels GPU-accelerated Image Processing: A Beginner to Expert Guide
November 4, 2022 | Yuanming Hu, Liang Zhao
A tutorial on GPU-accelerated image processing with Taichi, taking you from beginner to expert.
Read more
Taichi & PyTorch 03: Accelerate PyTorch with Taichi - Data Preprocessing & High-performance ML Operator Customization
September 15, 2022 | Ailing Zhang, Haidong Lan
Our previous blogs (Taichi & PyTorch 01 and 02) pointed out that Taichi and PyTorch serve different application scenarios. So, can they complement each other? The answer is an unequivocal yes! In this blog, we use two simple examples to explain how to write Taichi kernels that implement data preprocessing operators or custom ML operators. With Taichi, you can accelerate your ML model development with ease and get rid of tedious low-level parallel programming (CUDA, for example) for good.
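For a taste of the interop, here is a minimal, hypothetical sketch of a preprocessing operator: a Taichi kernel that brightens an image stored in a PyTorch tensor in place, relying on Taichi kernels' ability to take torch tensors as ti.types.ndarray() arguments. The function name and parameters are invented for illustration.

```python
import taichi as ti
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
ti.init(arch=ti.cuda if device == 'cuda' else ti.cpu)

@ti.kernel
def brighten(img: ti.types.ndarray(), delta: ti.f32):
    # A toy preprocessing operator: add a constant to every pixel in place,
    # clamping the result to [0, 1].
    for i, j in ti.ndrange(img.shape[0], img.shape[1]):
        img[i, j] = ti.min(img[i, j] + delta, 1.0)

# The kernel reads and writes the torch tensor directly (no copy on CUDA),
# so it drops into an existing PyTorch pipeline.
x = torch.rand(256, 256, device=device)
brighten(x, 0.1)
print(x.mean())
```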
Read more
Training a magic fountain using Taichi's autodiff, an efficient tool for differentiable physical simulation
July 14, 2022 | Mingrui Zhang
With the generated gradient information, a differentiable physical simulator can make the machine learning process converge an order of magnitude faster than gradient-free algorithms, such as model-free reinforcement learning.
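To make the idea concrete, here is a tiny, self-contained sketch of Taichi's autodiff (not the fountain controller from the article): a scalar loss is accumulated inside ti.ad.Tape, and the resulting gradient field is ready to drive any gradient-based optimizer. The field sizes and the toy objective are arbitrary.

```python
import taichi as ti

ti.init(arch=ti.gpu)

N = 16
x = ti.field(dtype=ti.f32, shape=N, needs_grad=True)
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def init():
    for i in x:
        x[i] = ti.random()

@ti.kernel
def compute_loss():
    for i in x:
        loss[None] += (x[i] - 0.5) ** 2  # toy objective

init()
loss[None] = 0.0
# The tape records the forward pass and fills in the gradients on exit
with ti.ad.Tape(loss=loss):
    compute_loss()

# x.grad now holds d(loss)/dx, ready to feed a gradient-based optimizer
print(x.grad.to_numpy())
```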
Read more
How I created the tranquil autumn air within 99 lines of Python code
May 16, 2022 | Yuanming Hu
On a Sunday afternoon a couple of months ago, when Ye and I were on our way back from a long week of travel, we decided to do something relaxing on the train to kill time. Since we happened to mention Minecraft and MagicaVoxel, we settled on a hackathon in which we would use Taichi Lang to create a GPU path-tracing voxel renderer. Soon, before we were back home, we had our prototype:
Read more
Head First Taichi: A Beginner's Guide to High Performance Computing in Python
October 12, 2021 | Dunfan Lu
Ever since the Python programming language was born, its core philosophy has always been to maximize the readability and simplicity of code. In fact, the pursuit of readability and simplicity runs so deep in Python's roots that if you type import this in a Python console, it will recite a little poem:
Read more