Building a Neural Network from Scratch with NumPy
1) Why build a neural network from scratch?
What: Motivation — learning fundamentals by implementing everything yourself (no frameworks).
Why it matters: Forces you to understand forward pass, backprop, and optimization instead of treating networks as black boxes.
Key points to teach: conceptual benefit, debugging skills, how libraries (TensorFlow/PyTorch) map to fundamentals.
Real-world example: A data scientist debugging a production model that suddenly misbehaves — knowing backprop math helps identify gradient scaling/initialization issues instead of blindly changing hyperparameters.
2) Prerequisites: linear algebra, calculus, Python & NumPy basics
What: Short refresher on vectors, matrices, dot products, elementwise ops, matrix shapes, derivatives (chain rule), broadcasting in NumPy.
Concrete checklist for readers:
- Linear algebra: vector dot product, matrix multiplication, transpose, identity.
- Calculus: derivatives of common functions (sigmoid, tanh, ReLU, MSE loss, cross-entropy).
- NumPy: np.dot, np.matmul, broadcasting, random seeds, .shape.
Real-world example: Understanding matrix multiplication is crucial when you convert a tabular dataset (N samples × D features) into an input X and compute X.dot(W) to get linear outputs.
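As a quick illustration of the shapes and broadcasting involved (the numbers and dimensions here are toy values, not part of any tutorial dataset):

```python
import numpy as np

# Hypothetical shapes: N=4 samples, D=3 features, H=2 outputs.
X = np.arange(12, dtype=float).reshape(4, 3)  # (4, 3) input matrix
W = np.ones((3, 2))                           # (3, 2) weight matrix
b = np.array([10.0, 20.0])                    # (2,) bias, broadcast over rows

Z = X.dot(W) + b  # (4, 2): each row of X.dot(W) gets b added via broadcasting
print(Z.shape)    # (4, 2)
```

Broadcasting is what lets a (2,) bias vector be added to a (4, 2) matrix without an explicit loop.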
3) Single neuron (perceptron) — concept, math, and NumPy implementation
Concept: A neuron computes y = activation(w·x + b).
Math: Weighted sum (z = w^T x + b), activation a = σ(z).
Code sketch (NumPy):
import numpy as np
def neuron_forward(x, w, b, activation):
    z = np.dot(w, x) + b
    return activation(z)
Real-world example: Email spam detector — a single neuron with sigmoid can act as a simple binary classifier based on a few features (e.g., number of links, suspicious words).
4) Activation functions — types, derivatives, when to use each
Types & math:
- Sigmoid: σ(z) = 1/(1+e^{-z}); good for probability output, but saturates.
- Tanh: normalized to [-1, 1], zero-centered.
- ReLU: max(0, z); sparse activations, avoids some vanishing gradients.
- Softmax: for multi-class outputs; outputs a probability distribution.
Derivative notes: Provide formulas and intuition (chain rule).
Real-world example: For image classification with many classes, softmax + cross-entropy is the standard last-layer combo; ReLU inside hidden layers speeds up training on deep networks.
5) Forward propagation for a full (feedforward) network
What: How to stack layers: input → dense → activation → ... → output.
Matrix form (batch): Z1 = X.dot(W1) + b1; A1 = activation(Z1), repeat.
NumPy sketch for batch forward pass (L is the number of layers; the activation for each layer is chosen inside the loop):
def forward(X, params, L):
    cache = {}
    A = X
    for i in range(1, L + 1):
        Z = A.dot(params[f"W{i}"]) + params[f"b{i}"]
        A = activation(Z)  # depends on layer
        cache[f"Z{i}"], cache[f"A{i}"] = Z, A
    return A, cache
Real-world example: Predicting house prices: input features (square feet, bedrooms) pass through hidden layers to produce a single continuous output (regression).
6) Loss functions — measuring error (MSE, cross-entropy)
What & math:
- MSE (Mean Squared Error) for regression: L = (1/N) Σ (y_pred - y_true)^2.
- Binary cross-entropy for binary classification: L = -(1/N) Σ [y log p + (1-y) log(1-p)].
- Categorical cross-entropy for multi-class (with softmax).
Why choice matters: Loss determines gradient shape and optimization stability.
Real-world example: Credit risk scoring (binary) — use binary cross-entropy. Predicting house price — MSE.
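A minimal NumPy sketch of the two formulas above (the clipping guard is an assumption we add so that log() never sees exactly 0 or 1):

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Mean squared error: (1/N) * sum((y_pred - y_true)^2)
    return np.mean((y_pred - y_true) ** 2)

def binary_cross_entropy(p, y, eps=1e-8):
    # Clip predicted probabilities away from 0 and 1 to keep log() finite.
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```

For an uninformative classifier that always predicts p = 0.5, the binary cross-entropy is log 2 ≈ 0.6931, which is why training logs for balanced binary problems often start near that value.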
7) Backpropagation — deriving gradients & implementing in NumPy
Core idea: Use chain rule to compute gradients of loss w.r.t. weights efficiently, layer by layer, starting from output back toward input.
Step-by-step:
- Compute dL/dA_L from the loss.
- For each layer l, compute dL/dZ_l = dL/dA_l * activation'(Z_l).
- dL/dW_l = A_{l-1}^T dot dL/dZ_l (with proper batch averaging).
- dL/db_l = sum of dL/dZ_l across the batch.
- Propagate dL/dA_{l-1} = dL/dZ_l dot W_l^T.
NumPy sketch (single-layer gradients; m is the batch size):
dZ = dA * activation_prime(Z)
dW = A_prev.T.dot(dZ) / m
db = np.sum(dZ, axis=0, keepdims=True) / m
dA_prev = dZ.dot(W.T)
Real-world example: Training a small feedforward network for predicting churn — backprop computes how to tweak all weights so predicted churn probabilities match actual churn events.
8) Optimization algorithms — gradient descent, SGD, momentum, Adam
Algorithms & intuition:
- Batch Gradient Descent: update using the entire dataset (stable, slow).
- SGD (mini-batches): faster, often better generalization.
- Momentum: accumulates velocity to accelerate along consistent gradients.
- Adam: adaptive learning rates per parameter (widely used).
Equations & small pseudocode: show parameter update variations.
Real-world example: Training recommendation systems on huge datasets — mini-batch SGD or Adam are essential to scale and converge quickly.
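One possible sketch of the three update rules (function signatures are our own; the momentum form shown, v = βv + g, is one of several common variants, and the hyperparameter defaults follow widely used choices):

```python
import numpy as np

def sgd_update(w, grad, lr=0.01):
    # Vanilla (stochastic) gradient descent step.
    return w - lr * grad

def momentum_update(w, grad, v, lr=0.01, beta=0.9):
    # Accumulate an exponentially decaying velocity, then step along it.
    v = beta * v + grad
    return w - lr * v, v

def adam_update(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: adaptive per-parameter step from bias-corrected moment estimates.
    m = b1 * m + (1 - b1) * grad        # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)           # bias correction; t starts at 1
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In a real loop the caller keeps `v` (and for Adam `m`, `v`, and the step counter `t`) as persistent state alongside each parameter.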
9) Weight initialization, normalization, and vanishing/exploding gradients
Problems: Bad initialization causes slow learning or saturation (vanishing/exploding gradients).
Heuristics:
- Xavier/Glorot initialization for tanh: scale by sqrt(2/(fan_in + fan_out)).
- He initialization for ReLU: scale by sqrt(2/fan_in).
- Batch normalization: normalize layer inputs during training to stabilize gradients.
Real-world example: Without proper initialization, training a deep network for medical image segmentation might never converge or give poor performance.
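The two scaling heuristics can be sketched as follows (seeded for reproducibility; the shapes are illustrative):

```python
import numpy as np

def xavier_init(fan_in, fan_out, seed=0):
    # Glorot/Xavier: Var(W) = 2/(fan_in + fan_out); suits tanh/sigmoid layers.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / (fan_in + fan_out))

def he_init(fan_in, fan_out, seed=0):
    # He: Var(W) = 2/fan_in; suits ReLU layers.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)
```

The point of both rules is to keep the variance of activations roughly constant from layer to layer, so early layers neither saturate nor blow up.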
10) Regularization — L2, L1, Dropout, early stopping
What & how:
- L2 (weight decay): add (λ/2)||W||^2 to the loss → penalizes large weights (the 1/2 makes the gradient exactly λW).
- L1: sparsity-inducing.
- Dropout: randomly drop neurons during training to prevent co-adaptation.
- Early stopping: monitor validation loss to prevent overfitting.
Implementation note: L2 adds a gradient term dW += λ * W.
Real-world example: In fraud detection, dataset imbalance and noise can cause overfitting — apply L2 and early stopping so the model generalizes to new fraudulent patterns.
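A possible sketch of the L2 term and of inverted dropout (the λ/2 convention makes the gradient exactly λW; keep_prob and the fixed seed are assumptions for illustration):

```python
import numpy as np

def l2_penalty(W, lam):
    # Added to the data loss: (lam/2)*||W||^2, so d/dW is simply lam*W.
    return 0.5 * lam * np.sum(W ** 2)

def l2_grad(W, lam):
    return lam * W

def dropout_forward(A, keep_prob=0.8, seed=0):
    # Inverted dropout: scale kept units by 1/keep_prob at train time,
    # so no rescaling is needed at inference.
    rng = np.random.default_rng(seed)
    mask = (rng.random(A.shape) < keep_prob).astype(float)
    return A * mask / keep_prob, mask
```

During backprop the same `mask / keep_prob` factor is applied to the incoming gradient; at inference time dropout is simply skipped.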
11) Training loop & putting it all together (NumPy implementation)
Components of loop: initialize parameters, for each epoch: shuffle data, mini-batch loop → forward → loss → backprop → update parameters; evaluate on validation set; track metrics.
Complete pseudocode structure: present an end-to-end loop with evaluation and saving best model.
Mini NumPy blueprint: include shapes, how to batch, and small code snippet for update step (show SGD and Adam variations).
Real-world example: Implementing a training loop to train a neural net that predicts energy consumption from historical sensor data; track MAE on validation set and stop when it stops improving.
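The shuffle-and-slice part of the loop described above can be sketched as (names are our own):

```python
import numpy as np

def iterate_minibatches(X, y, batch_size, rng):
    # Shuffle the sample indices once per epoch, then yield contiguous slices.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]
```

Each epoch the caller would wrap this in `for Xb, yb in iterate_minibatches(...)` and run forward, loss, backward, and update on each (Xb, yb) pair.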
12) Practical example: Binary classifier from scratch (end-to-end)
Dataset suggestion: Simple, interpretable dataset — e.g., predicting diabetes from health features or a synthetic 2D spiral dataset for visualization.
What to include: data preprocessing, model definition (input → hidden layer(s) → sigmoid), training hyperparameters, training loop code in NumPy, evaluation (accuracy, ROC curve).
Complete code skeleton: provide a runnable script structure (data load, init_params, forward, backward, update, train, evaluate) — concise but complete.
Real-world example: Diabetes prediction — demonstrate how feature scaling and class balance influence learning and final performance.
13) Practical example: Multi-class classifier — softmax and cross-entropy
What: Softmax output layer, cross-entropy loss, numeric stability tricks (subtract max from logits).
Key numerical trick: exp(logits - logits.max(axis=1, keepdims=True)) to prevent overflow.
NumPy snippet for softmax:
def softmax(Z):
    Z_shift = Z - np.max(Z, axis=1, keepdims=True)
    expZ = np.exp(Z_shift)
    return expZ / np.sum(expZ, axis=1, keepdims=True)
Real-world example: Classifying handwritten digits (MNIST) into 10 classes — show confusion matrix and sample misclassifications for insight.
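Alongside the softmax function, a sketch of cross-entropy with integer class labels and of the combined gradient, using the well-known simplification dL/dZ = (probs − one_hot)/m (the eps guard is our addition):

```python
import numpy as np

def softmax(Z):
    Z_shift = Z - np.max(Z, axis=1, keepdims=True)  # stability shift
    expZ = np.exp(Z_shift)
    return expZ / np.sum(expZ, axis=1, keepdims=True)

def cross_entropy_loss(probs, y, eps=1e-8):
    # y holds integer class labels; index out the true-class probability.
    m = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(m), y] + eps))

def softmax_ce_grad(probs, y):
    # For softmax + cross-entropy, dL/dZ = (probs - one_hot) / m.
    m = probs.shape[0]
    grad = probs.copy()
    grad[np.arange(m), y] -= 1.0
    return grad / m
```

This simplification is why the two are almost always implemented as a fused output layer: the messy softmax Jacobian cancels against the cross-entropy derivative.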
14) Debugging tips and visualizations
Tips:
- Check shapes at each step.
- Gradient checking (finite differences) to verify the backprop implementation.
- Plot loss & accuracy curves; visualize weights and activations.
- Use tiny datasets to check overfitting (the model should memorize a small set).
Gradient checking sketch: approximate dW ≈ (L(W+ε) - L(W-ε)) / (2ε) and compare.
Real-world example: Debugging a failing model for sensor anomaly detection — gradient checking reveals sign error in derivative of activation.
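The finite-difference check above can be sketched as follows (perturbs one weight at a time; the relative-difference formula at the end is one common choice of comparison metric):

```python
import numpy as np

def grad_check(loss_fn, W, analytic_grad, eps=1e-5):
    # Numerically approximate dL/dW with centered differences,
    # then compare against the analytic gradient from backprop.
    num_grad = np.zeros_like(W)
    for i in np.ndindex(W.shape):
        old = W[i]
        W[i] = old + eps
        plus = loss_fn(W)
        W[i] = old - eps
        minus = loss_fn(W)
        W[i] = old                      # restore the weight
        num_grad[i] = (plus - minus) / (2 * eps)
    denom = np.linalg.norm(num_grad) + np.linalg.norm(analytic_grad) + 1e-12
    return np.linalg.norm(num_grad - analytic_grad) / denom
```

A relative difference below roughly 1e-7 usually means the backprop code is correct; values near 1e-2 or larger point to a bug such as a wrong sign or a missing term.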
15) Advanced topics & extensions
Next steps for readers:
- Convolutional Neural Networks (CNNs) from scratch (convolution, pooling).
- Recurrent Neural Networks (RNNs), LSTM basics.
- Autodiff: brief intro to automatic differentiation used by frameworks.
- GPU acceleration overview and when to move to PyTorch/TensorFlow.
Real-world examples: CNNs for medical image diagnosis; RNNs for time series forecasting (stock prices, demand prediction).
16) Evaluation & deployment basics
What to evaluate: accuracy, precision/recall, F1, AUC, calibration, and business metrics (cost of false positives/negatives).
Deployment steps: export parameters, write prediction function, wrap in an API (Flask/FastAPI) or a batch job. Discuss serialization formats (np.save, JSON for small models).
Real-world example: Deploy a demand forecasting model as a daily batch job that writes predictions into a database used by logistics planners.
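A minimal sketch of parameter serialization with np.savez, as mentioned above (the file name and parameter values here are illustrative only):

```python
import os
import tempfile
import numpy as np

def save_params(params, path):
    # np.savez stores each array in the dict under its own key.
    np.savez(path, **params)

def load_params(path):
    with np.load(path) as data:
        return {k: data[k] for k in data.files}

# Round-trip demo with throwaway parameters.
params = {"W1": np.ones((2, 2)), "b1": np.zeros((1, 2))}
path = os.path.join(tempfile.gettempdir(), "model.npz")
save_params(params, path)
restored = load_params(path)
```

A prediction service then only needs `load_params` plus the forward-pass function; no training code ships to production.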
17) Experiments & hyperparameter tuning
Parameters to tune: learning rate, batch size, number of layers/neurons, activation types, regularization strength.
Methodologies: grid search, random search, basic Bayesian ideas; track experiments with simple logs or tools like Weights & Biases (optional).
Real-world example: Tuning a recommendation model: small learning-rate changes can drastically change top-K recommendation quality.
18) Final project idea & full tutorial roadmap
Project idea: Build a complete pipeline — data ingestion, preprocessing, neural network from scratch, train/validate, small web UI to demo predictions.
Roadmap with milestones: data prep → single neuron → 2-layer NN → add regularization → add optimizer → evaluation → deploy.
Real-world example: End-to-end “House Price Predictor” web app: users input features and get predicted price, with explanation of important features (via simple feature importance approximations).
19) Resources & references
What to include: Papers, textbooks (e.g., Goodfellow), online lectures, NumPy docs, and starter datasets (UCI repository, Kaggle, MNIST). Also link to succinct cheat-sheets (activation derivatives, initialization formulas).
Real-world note: Encourage reading production-case studies to understand deployment and ethical considerations (bias, fairness).
20) Appendix: full minimal NumPy implementation (paste-ready)
What to include in the appendix: A single-file minimal but complete implementation: data generation (or loader), init_params, forward, compute_loss, backward, update, train, and predict — plus a small demo run and visual plots (loss vs epochs).
Real-world example: Provide train.py that trains on the Iris dataset (converted to numeric features) and prints final accuracy.
Practical Example: Binary Classifier from Scratch (End-to-End)
🎯 Objective
In this section, we’ll build a binary classifier neural network from scratch using only NumPy.
We’ll predict whether a patient has diabetes (Yes/No) based on simple numeric health indicators — a real-world-style binary classification problem.
This will tie together everything we’ve learned: forward propagation, loss, backpropagation, and parameter updates.
🩺 Real-World Analogy
Imagine you work in a hospital analytics team. You want to automatically flag patients who may be at high risk of diabetes based on their health metrics — things like BMI, glucose level, blood pressure, and age.
Each patient record becomes a feature vector (numbers representing health data), and our neural network will predict the probability of having diabetes.
🧩 Step 1 — Dataset Setup
We’ll simulate a small dataset for simplicity (you can later replace it with a real dataset like Pima Indians Diabetes).
import numpy as np
# Simulated dataset (100 samples, 3 features)
np.random.seed(42)
X = np.random.randn(100, 3)
# Simulate binary labels (0 or 1) using a linear boundary
true_weights = np.array([[2.0], [-1.0], [0.5]])
true_bias = 0.2
logits = X.dot(true_weights) + true_bias
y = (1 / (1 + np.exp(-logits)) > 0.5).astype(int).flatten()
Here:
- X = input data (100 patients × 3 features)
- y = 0 (no diabetes) or 1 (diabetes)
⚙️ Step 2 — Network Architecture
We’ll build a small 2-layer neural network:
- Input layer: 3 features
- Hidden layer: 4 neurons, ReLU activation
- Output layer: 1 neuron, sigmoid activation (for binary probability)
Input(3) → Hidden(4, ReLU) → Output(1, Sigmoid)
⚒️ Step 3 — Initialize Parameters
Weights are initialized with small random values.
def initialize_parameters():
    np.random.seed(1)
    W1 = np.random.randn(3, 4) * 0.01  # 3 inputs → 4 hidden units
    b1 = np.zeros((1, 4))
    W2 = np.random.randn(4, 1) * 0.01  # 4 hidden → 1 output
    b2 = np.zeros((1, 1))
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
🔁 Step 4 — Forward Propagation
We’ll compute hidden and output activations.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

def forward_propagation(X, params):
    W1, b1, W2, b2 = params["W1"], params["b1"], params["W2"], params["b2"]
    Z1 = X.dot(W1) + b1
    A1 = relu(Z1)
    Z2 = A1.dot(W2) + b2
    A2 = sigmoid(Z2)
    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache
📉 Step 5 — Compute Loss
We’ll use binary cross-entropy loss.
def compute_loss(A2, Y):
    m = Y.shape[0]
    Y = Y.reshape(-1, 1)  # align shapes: A2 is (m, 1), Y arrives as (m,)
    loss = -(1/m) * np.sum(Y*np.log(A2 + 1e-8) + (1 - Y)*np.log(1 - A2 + 1e-8))
    return np.squeeze(loss)
🔙 Step 6 — Backward Propagation
We compute gradients using the chain rule.
def relu_derivative(Z):
    return (Z > 0).astype(float)

def backward_propagation(X, Y, params, cache):
    m = X.shape[0]
    W2 = params["W2"]
    A1, A2, Z1 = cache["A1"], cache["A2"], cache["Z1"]
    dZ2 = A2 - Y.reshape(-1, 1)
    dW2 = (A1.T.dot(dZ2)) / m
    db2 = np.sum(dZ2, axis=0, keepdims=True) / m
    dA1 = dZ2.dot(W2.T)
    dZ1 = dA1 * relu_derivative(Z1)
    dW1 = (X.T.dot(dZ1)) / m
    db1 = np.sum(dZ1, axis=0, keepdims=True) / m
    grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
    return grads
⚡ Step 7 — Parameter Update
We’ll use standard gradient descent.
def update_parameters(params, grads, learning_rate=0.01):
    for key in params:
        params[key] -= learning_rate * grads["d" + key]
    return params
🧠 Step 8 — Training Loop
Now, we’ll connect everything together.
def train(X, Y, epochs=1000, learning_rate=0.1):
    params = initialize_parameters()
    losses = []
    for epoch in range(epochs):
        A2, cache = forward_propagation(X, params)
        loss = compute_loss(A2, Y)
        grads = backward_propagation(X, Y, params, cache)
        params = update_parameters(params, grads, learning_rate)
        if epoch % 100 == 0:
            print(f"Epoch {epoch}, Loss: {loss:.4f}")
        losses.append(loss)
    return params, losses
✅ Step 9 — Prediction & Accuracy
We can now make predictions and evaluate accuracy.
def predict(X, params):
    A2, _ = forward_propagation(X, params)
    return (A2 > 0.5).astype(int)

# Train and test
params, losses = train(X, y, epochs=1000, learning_rate=0.1)
predictions = predict(X, params)
accuracy = np.mean(predictions.flatten() == y) * 100
print(f"Training Accuracy: {accuracy:.2f}%")
📊 Step 10 — Output Example
Typical output:
Epoch 0, Loss: 0.6931
Epoch 100, Loss: 0.5624
Epoch 200, Loss: 0.4718
Epoch 300, Loss: 0.3927
...
Training Accuracy: 96.00%
Key Learnings
- Forward propagation computes outputs layer by layer.
- Backpropagation uses derivatives to adjust weights and minimize loss.
- Binary cross-entropy measures how far our predictions are from the actual labels.
- With only NumPy, we can recreate the core logic of modern deep learning frameworks.
Real-World Reflection
This small neural network can be extended to:
- Predict customer churn (telecom, banking)
- Detect fraudulent transactions
- Classify spam emails
- Identify disease risk (as we simulated here)
In practice, you’d scale up with:
- More layers and neurons
- Optimizers like Adam
- Real datasets (like Pima Indians Diabetes)
Full Single-File NumPy Neural Network Tutorial
This file will include:
- Comments explaining each step
- Proper structure (functions + training loop + evaluation)
- Lightweight, dependency-free code (just NumPy)
- Easy to modify for any dataset
File name: binary_nn_numpy.py
"""
Title: Binary Neural Network Classifier from Scratch using NumPy
Author: Jerome S
Description:
This tutorial builds a complete 2-layer neural network from scratch using NumPy.
The model predicts binary outcomes (0/1), such as disease presence, customer churn, or fraud detection.
Architecture:
Input Layer -> Hidden Layer (ReLU) -> Output Layer (Sigmoid)
"""
import numpy as np
# ==============================
# Step 1: Generate Synthetic Data
# ==============================
def generate_data(samples=100, features=3, seed=42):
    """
    Generates a synthetic dataset for binary classification.
    """
    np.random.seed(seed)
    X = np.random.randn(samples, features)
    # True underlying function (for simulation)
    true_weights = np.array([[2.0], [-1.0], [0.5]])
    true_bias = 0.2
    logits = X.dot(true_weights) + true_bias
    y = (1 / (1 + np.exp(-logits)) > 0.5).astype(int).flatten()
    return X, y
# ==============================
# Step 2: Activation Functions
# ==============================
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

def relu_derivative(z):
    return (z > 0).astype(float)
# ==============================
# Step 3: Initialize Parameters
# ==============================
def initialize_parameters(input_dim, hidden_dim, output_dim):
    """
    Initialize weights and biases for a 2-layer neural network.
    """
    np.random.seed(1)
    W1 = np.random.randn(input_dim, hidden_dim) * 0.01
    b1 = np.zeros((1, hidden_dim))
    W2 = np.random.randn(hidden_dim, output_dim) * 0.01
    b2 = np.zeros((1, output_dim))
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
# ==============================
# Step 4: Forward Propagation
# ==============================
def forward_propagation(X, params):
    W1, b1, W2, b2 = params["W1"], params["b1"], params["W2"], params["b2"]
    Z1 = X.dot(W1) + b1
    A1 = relu(Z1)
    Z2 = A1.dot(W2) + b2
    A2 = sigmoid(Z2)
    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache
# ==============================
# Step 5: Compute Loss
# ==============================
def compute_loss(A2, Y):
    """
    Binary cross-entropy loss.
    """
    m = Y.shape[0]
    Y = Y.reshape(-1, 1)  # align with A2, which has shape (m, 1)
    loss = -(1/m) * np.sum(Y*np.log(A2 + 1e-8) + (1 - Y)*np.log(1 - A2 + 1e-8))
    return np.squeeze(loss)
# ==============================
# Step 6: Backward Propagation
# ==============================
def backward_propagation(X, Y, params, cache):
    """
    Computes gradients using backpropagation.
    """
    m = X.shape[0]
    W2 = params["W2"]
    A1, A2, Z1 = cache["A1"], cache["A2"], cache["Z1"]
    # Output layer gradients
    dZ2 = A2 - Y.reshape(-1, 1)
    dW2 = (A1.T.dot(dZ2)) / m
    db2 = np.sum(dZ2, axis=0, keepdims=True) / m
    # Hidden layer gradients
    dA1 = dZ2.dot(W2.T)
    dZ1 = dA1 * relu_derivative(Z1)
    dW1 = (X.T.dot(dZ1)) / m
    db1 = np.sum(dZ1, axis=0, keepdims=True) / m
    grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
    return grads
# ==============================
# Step 7: Parameter Update
# ==============================
def update_parameters(params, grads, learning_rate=0.01):
    for key in params:
        params[key] -= learning_rate * grads["d" + key]
    return params
# ==============================
# Step 8: Training Loop
# ==============================
def train(X, Y, input_dim=3, hidden_dim=4, output_dim=1, epochs=1000, learning_rate=0.1):
    params = initialize_parameters(input_dim, hidden_dim, output_dim)
    losses = []
    for epoch in range(epochs):
        A2, cache = forward_propagation(X, params)
        loss = compute_loss(A2, Y)
        grads = backward_propagation(X, Y, params, cache)
        params = update_parameters(params, grads, learning_rate)
        if epoch % 100 == 0:
            print(f"Epoch {epoch:4d} | Loss: {loss:.4f}")
        losses.append(loss)
    return params, losses
# ==============================
# Step 9: Prediction and Accuracy
# ==============================
def predict(X, params):
    A2, _ = forward_propagation(X, params)
    return (A2 > 0.5).astype(int)

def accuracy(y_true, y_pred):
    return np.mean(y_true == y_pred.flatten()) * 100
# ==============================
# Step 10: Main Execution
# ==============================
if __name__ == "__main__":
    # Generate dataset
    X, y = generate_data(samples=200, features=3)
    # Train network
    params, losses = train(X, y, epochs=1000, learning_rate=0.1)
    # Evaluate
    y_pred = predict(X, params)
    acc = accuracy(y, y_pred)
    print(f"\nTraining Accuracy: {acc:.2f}%")
    # Display sample predictions
    print("\nSample Predictions (first 10):")
    print("Predicted:", y_pred[:10].flatten())
    print("Actual:   ", y[:10])
How to Run It
1️⃣ Save as binary_nn_numpy.py
2️⃣ Run in terminal:
python binary_nn_numpy.py
You’ll see:
Epoch 0 | Loss: 0.6931
Epoch 100 | Loss: 0.5638
Epoch 200 | Loss: 0.4599
...
Training Accuracy: 95.00%
You Can Extend This
- Replace data with real datasets (e.g., diabetes.csv).
- Add layers (just expand forward/backward with loops).
- Use the Adam optimizer for faster convergence.
- Visualize loss with Matplotlib.


