# API Overview
Here's a handy reference for the main building blocks of microgradpp.
## Value Class
The Value class is your go-to for automatic differentiation. Every scalar value in your computation should be a Value.
### Creating Values

```cpp
auto x = microgradpp::Value::create(2.0);
```

### Supported Operations
| Operation | Example |
|---|---|
| Addition | `auto c = a + b;` |
| Multiplication | `auto c = a * b;` |
| Power | `auto c = a->pow(2);` |
| Subtraction | `auto c = a - b;` |
### Running Backprop

```cpp
auto z = x * y + x->pow(2);
z->backProp(); // Fills in .grad for all Values in the graph
std::cout << x->grad; // Gradient w.r.t. x
std::cout << y->grad; // Gradient w.r.t. y
```
## MLP Class

The MLP class lets you define and train multi-layer perceptrons with minimal boilerplate.
### Creating a Model

```cpp
// MLP(input_size, {hidden_sizes..., output_size})
MLP model(3, {4, 4, 1});
```

### Forward Pass
```cpp
auto output = model(input_data);
```

### Training Step
```cpp
// 1. Zero out gradients
model.zeroGrad();
// 2. Forward pass
auto output = model(input_data);
// 3. Compute loss
auto loss = /* your loss computation */;
// 4. Backprop
loss->backProp();
// 5. Update weights
// (implement your optimizer here)
```

The training loop pattern above (zero grad → forward → loss → backprop → update) is the standard cycle used in virtually all neural network training. Get comfortable with it!
## Tensor Class
The Tensor class simplifies loading and manipulating data before feeding it into your models:
```cpp
#include "microgradpp/Tensor.hpp"
// Load and manipulate your data with ease
```

For the full API details and advanced usage, check out the header files in the `include/` directory; they're well-documented and easy to follow.