Introduction
Micrograd is a tiny yet powerful Autograd engine designed to facilitate the implementation of neural networks. Developed by Andrej Karpathy, this project provides a straightforward way to perform backpropagation over a dynamically built Directed Acyclic Graph (DAG). With its PyTorch-like API, Micrograd is an excellent tool for both educational purposes and practical applications in machine learning.
Features
- Dynamic Computation Graph: Build and modify the graph on-the-fly, allowing for flexible model architectures (a short sketch follows this list).
- Simple API: A user-friendly interface that mimics PyTorch, making it easy for developers familiar with that framework.
- Educational Focus: Ideal for learning and teaching the fundamentals of neural networks and backpropagation.
- Lightweight: The entire engine is compact, with only about 100 lines of code for the autograd functionality.
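To make the dynamic-graph point concrete, here is a minimal sketch (an illustrative snippet, not from the official docs): the graph is simply whatever your Python code builds on a given forward pass, so ordinary control flow such as a loop changes its shape from call to call.

from micrograd.engine import Value

def forward(x, depth):
    out = Value(x)
    for _ in range(depth):            # the loop bound decides how many nodes get created
        out = (out * 2.0 + 1.0).relu()
    return out

y = forward(3.0, depth=2)             # one DAG
z = forward(3.0, depth=5)             # a differently shaped DAG, no recompilation needed
y.backward()                          # gradients flow through whatever graph was built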
Installation
To get started with Micrograd, you can easily install it using pip. Run the following command in your terminal:
pip install micrograd
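Once installed, a quick sanity check (an illustrative snippet, not part of the official docs) is to build a tiny expression and confirm that gradients come back as expected:

from micrograd.engine import Value

x = Value(3.0)
y = x * x              # y = x^2, so dy/dx = 2x = 6
y.backward()
print(y.data, x.grad)  # expected output: 9.0 6.0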
Usage
Micrograd allows you to create and manipulate values seamlessly. Here’s a simple example demonstrating its capabilities:
from micrograd.engine import Value
a = Value(-4.0)
b = Value(2.0)
c = a + b
d = a * b + b**3
c += c + 1
c += 1 + c + (-a)
d += d * 2 + (b + a).relu()
d += 3 * d + (b - a).relu()
e = c - d
f = e**2
g = f / 2.0
g += 10.0 / f
print(f'{g.data:.4f}') # prints 24.7041, the outcome of this forward pass
g.backward()
print(f'{a.grad:.4f}') # prints 138.8338, i.e. the numerical value of dg/da
print(f'{b.grad:.4f}') # prints 645.5773, i.e. the numerical value of dg/db
This code snippet illustrates how to perform basic operations and compute gradients using Micrograd.
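To see why roughly 100 lines are enough, here is a condensed sketch of the idea behind the engine. It is not the library's exact source, but it shows the two ingredients every operation relies on: each node remembers its inputs plus a closure for its local gradient, and backward() replays those closures in reverse topological order (only multiplication is implemented here to keep the sketch short).

class TinyValue:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._prev = set(children)        # the nodes this value was computed from
        self._backward = lambda: None     # closure that pushes gradient to the children

    def __mul__(self, other):
        out = TinyValue(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad    # d(out)/d(self)  = other.data
            other.grad += self.data * out.grad    # d(out)/d(other) = self.data
        out._backward = _backward
        return out

    def backward(self):
        # build a topological order, then apply the chain rule from the output back to the leaves
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

Multiplying two TinyValue objects and calling backward() on the result reproduces the same gradient accumulation Micrograd performs, just without the full operator set.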
Training a Neural Network
Micrograd also supports training neural networks. The demo.ipynb notebook provides a complete example of training a 2-layer neural network (MLP) for binary classification. Here's a brief overview of how it works:
# Example of initializing a single neuron and running a forward pass
from micrograd import nn
from micrograd.engine import Value

n = nn.Neuron(2)                 # a neuron with 2 inputs
x = [Value(1.0), Value(-2.0)]    # input values wrapped as Value objects
y = n(x)                         # forward pass; y is a Value holding the neuron's output
In this example, a neuron with two inputs is created and the input values are passed through it. The notebook scales this up to a 2-layer MLP and shows how to reach a decision boundary using a simple SVM "max-margin" loss with SGD for optimization; a sketch of that training loop follows below.
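Below is a hedged sketch of such a training loop. The MLP constructor, parameters(), and zero_grad() come from Micrograd's nn module; the toy dataset, layer sizes, learning rate, and step count are illustrative placeholders rather than the notebook's actual settings (demo.ipynb trains on a moons dataset and also adds L2 regularization).

from micrograd.nn import MLP

model = MLP(2, [16, 16, 1])                      # 2 inputs, two hidden layers, 1 output score
xs = [[2.0, 3.0], [-1.0, -1.0], [3.0, -1.0]]     # toy inputs
ys = [1.0, -1.0, 1.0]                            # labels in {-1, +1}

for step in range(20):
    scores = [model(x) for x in xs]
    # SVM "max-margin" loss: penalize scores that are not past a margin of 1 on the correct side
    losses = [(1 + -yi * si).relu() for yi, si in zip(ys, scores)]
    loss = sum(losses) * (1.0 / len(losses))
    model.zero_grad()
    loss.backward()
    for p in model.parameters():                 # plain SGD update
        p.data -= 0.05 * p.grad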
Tracing / Visualization
For those interested in visualizing the computation graph, the trace_graph.ipynb notebook generates Graphviz visualizations. Here's how you can create a visual representation of a simple neuron:
# y is the neuron output from the previous snippet;
# draw_dot is the Graphviz helper defined in trace_graph.ipynb
dot = draw_dot(y)
This will produce a visual output showing both the data and gradients at each node in the graph.
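Note that draw_dot itself is defined inside trace_graph.ipynb rather than in the installed package. Below is a condensed sketch of what such a helper can look like; it assumes the graphviz Python package (and the Graphviz system binaries) are available and relies on the _prev and _op attributes each Value carries.

from graphviz import Digraph

def draw_dot(root):
    dot = Digraph(graph_attr={'rankdir': 'LR'})             # left-to-right layout
    nodes, edges, stack = set(), set(), [root]
    while stack:                                            # walk the DAG starting from the output node
        v = stack.pop()
        if v in nodes:
            continue
        nodes.add(v)
        for child in v._prev:
            edges.add((child, v))
            stack.append(child)
    for v in nodes:
        dot.node(str(id(v)), label=f'data {v.data:.4f} | grad {v.grad:.4f}', shape='record')
        if v._op:                                           # a small extra node for the producing operation
            dot.node(str(id(v)) + v._op, label=v._op)
            dot.edge(str(id(v)) + v._op, str(id(v)))
    for child, parent in edges:
        dot.edge(str(id(child)), str(id(parent)) + parent._op)
    return dot

Calling draw_dot(y).render('graph', format='svg') writes the rendered graph to disk; in a notebook, simply returning the object displays it inline.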
Benefits of Using Micrograd
- Educational Value: Perfect for students and educators looking to understand the mechanics of neural networks.
- Lightweight and Efficient: With a small codebase, it’s easy to integrate and modify for specific needs.
- Community Support: Being open-source, it encourages contributions and collaboration among developers.
Conclusion/Resources
Micrograd is a remarkable tool for anyone interested in understanding and implementing neural networks. Its simplicity and educational focus make it a valuable resource for both beginners and experienced developers. For more information, check out the official repository at https://github.com/karpathy/micrograd.
FAQ
What is Micrograd?
Micrograd is a lightweight Autograd engine that implements backpropagation over a dynamically built computation graph, making it ideal for educational purposes and simple neural network implementations.
How do I install Micrograd?
You can install Micrograd with pip by running pip install micrograd in your terminal.
Can I visualize the computation graph?
Yes, Micrograd provides a way to visualize the computation graph using Graphviz. You can find examples in the trace_graph.ipynb notebook.