API Overview

A quick reference for the core classes and methods in microgradpp.


Value Class

The Value class is your go-to for automatic differentiation. Every scalar value in your computation should be a Value.

Creating Values

```cpp
auto x = microgradpp::Value::create(2.0);
```

Supported Operations

| Operation      | Example               |
| -------------- | --------------------- |
| Addition       | `auto c = a + b;`     |
| Multiplication | `auto c = a * b;`     |
| Power          | `auto c = a->pow(2);` |
| Subtraction    | `auto c = a - b;`     |

Running Backprop

```cpp
auto x = microgradpp::Value::create(2.0);
auto y = microgradpp::Value::create(3.0);

auto z = x * y + x->pow(2);
z->backProp();  // Fills in .grad for all Values in the graph

std::cout << x->grad;  // Gradient w.r.t. x
std::cout << y->grad;  // Gradient w.r.t. y
```

MLP Class

The MLP class lets you define and train multi-layer perceptrons with minimal boilerplate.

Creating a Model

```cpp
// MLP(input_size, {hidden_sizes..., output_size})
MLP model(3, {4, 4, 1});
```

Forward Pass

```cpp
auto output = model(input_data);
```

Training Step

```cpp
// 1. Zero out gradients
model.zeroGrad();

// 2. Forward pass
auto output = model(input_data);

// 3. Compute loss
auto loss = /* your loss computation */;

// 4. Backprop
loss->backProp();

// 5. Update weights
// (implement your optimizer here)
```

The training loop pattern above — zero grad → forward → loss → backprop → update — is the standard cycle used in virtually all neural network training. Get comfortable with it!

Tensor Class

The Tensor class simplifies loading and manipulating data before feeding it into your models:

```cpp
#include "microgradpp/Tensor.hpp"

// Load and manipulate your data with ease
```

For the full API details and advanced usage, check out the header files in the include/ directory — they're well-documented and easy to follow.