Image Classification Without Neural Networks: A Practical Guide

Introduction

Image classification today is dominated by deep learning, especially Convolutional Neural Networks (CNNs). However, many other tools exist beyond these advanced neural network architectures.

Image classification without neural networks can be performed through traditional machine learning methods, including Support Vector Machines (SVM), k-Nearest Neighbors (KNN), Decision Trees, and custom feature engineering.

This guide outlines the steps to build a reliable image classification pipeline using feature extraction techniques. It compares classical machine learning algorithms and demonstrates Python implementation on a real-world dataset.

Prerequisites

  • A fundamental understanding of core ML concepts, including supervised learning, model training, and evaluation metrics such as accuracy and F1-score.
  • Knowledge of numerical image representations such as pixel grids, grayscale, and RGB formats.
  • Comfort with Python and libraries such as NumPy, scikit-learn, and OpenCV.
  • An understanding of how features such as edges and shapes are extracted from raw image data.
  • Familiarity with classical algorithms like SVM, KNN, and Decision Trees.

Setting Up the Problem: Supervised Classification Basics

Fundamentally, image classification is a supervised learning task. The dataset must contain images, each assigned to one of k classes. The objective is to develop a model that accurately predicts the label of unseen images. When planning such a project, you must account for the following components:

  • Data: Each image in the dataset comes with its correct label as the ground truth class.
  • Features: Each image must be converted into a numeric format that models can process for image classification.
  • Learning Algorithm: A learning algorithm (such as SVM, Decision Trees, or KNN) maps the extracted features to their corresponding labels.
  • Evaluation: You must evaluate the trained model’s ability to generalize to unseen images, using metrics such as accuracy, precision, recall, and F1-score (see our article on Deep Learning Metrics for more on performance metrics). A quick sketch of computing these metrics follows this list.
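As a minimal illustration, here is how these metrics can be computed with scikit-learn; the label arrays below are hypothetical, for a 3-class problem:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth and predicted labels for a 3-class problem
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average='macro'))
print("Recall:   ", recall_score(y_true, y_pred, average='macro'))
print("F1-score: ", f1_score(y_true, y_pred, average='macro'))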

The essential difference between classical machine learning and deep learning in this pipeline lies in who handles feature extraction. CNNs in deep learning systems extract features from the data automatically, whereas classical machine learning requires you to engineer features manually.

Feature Engineering in Traditional Image Classification Methods

Image classification without neural networks relies on feature engineering as its fundamental component. The process requires the extraction of numerical descriptors that capture relevant information about the image. The table below displays the typical feature-based image classification techniques:

| Feature Type | Methods/Techniques | Description | Use Cases |
| --- | --- | --- | --- |
| Color Features | Color Histograms, Color Moments, Dominant Colors | Color Histograms: frequency distributions of pixel intensities in color channels. Color Moments: statistical summaries such as mean, variance, and skewness of color channels. Dominant Colors: the most visually prominent colors in an image. | Detecting product packaging in retail images; identifying ripe vs. unripe fruits; scene classification based on color (e.g., desert vs. forest) |
| Texture Features | Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor Filters | GLCM: captures texture by measuring pixel-pair spatial relationships. LBP: encodes local texture by comparing each pixel to its neighbors. Gabor Filters: extract frequency and orientation details, mimicking human vision. | Texture classification of fabrics or surfaces; face recognition and fingerprint matching; medical imaging analysis (e.g., tumor texture) |
| Shape Features | Contours, Moments, Shape Descriptors (circularity, convexity, aspect ratio) | Contours: boundaries that define object outlines. Moments: statistical metrics capturing shape distribution. Shape Descriptors: quantitative descriptions of geometric properties. | Object detection in autonomous driving (e.g., vehicles, pedestrians); leaf classification in botany; tool recognition in manufacturing |
| Edge Detection | Canny Edge Detector, Sobel Operator, Laplacian Operator | Canny: multi-stage detector providing clean, thin edges with noise suppression. Sobel: gradient-based method highlighting horizontal and vertical edges. Laplacian: detects edges using second-order derivatives to highlight intensity changes. | Barcode and QR code detection; medical edge segmentation (e.g., bone boundaries in X-rays); document layout analysis |
| Keypoint Features | SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), KAZE Features | SIFT: detects and describes robust keypoints invariant to scale and rotation. SURF: optimized for speed and suitable for real-time keypoint matching. KAZE: detects keypoints respecting natural image boundaries using nonlinear scale spaces. | Panorama stitching; object tracking in videos; robot navigation via landmark detection |

Some Popular Feature Extraction Techniques

Let’s explore some popular feature extraction techniques and how they work:

Histogram of Oriented Gradients (HOG)

HOG captures local object appearance and shape through gradient directions:

  1. Start by dividing the image into small, connected sections called cells.
  2. Next, you’ll create a histogram showing the gradients’ different directions for each of these cells.
  3. Once you have those histograms, normalize them across blocks of cells.
  4. Finally, all these histograms are combined to create a feature descriptor.

HOG is particularly effective for object detection, especially for rigid objects with well-defined shapes.
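As a minimal sketch, here is how HOG features can be computed with scikit-image; the parameter values are illustrative, and the sample image ships with the library:

from skimage import color, data
from skimage.feature import hog

# Grayscale sample image bundled with scikit-image
image = color.rgb2gray(data.astronaut())

# Compute the HOG descriptor; parameter values here are illustrative
features, hog_image = hog(
    image,
    orientations=9,          # gradient-direction bins per cell
    pixels_per_cell=(8, 8),  # cell size in pixels
    cells_per_block=(2, 2),  # cells per normalization block
    visualize=True,          # also return a visualization image
)
print(features.shape)        # the flattened feature descriptor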

Local Binary Patterns (LBP)

Local Binary Patterns capture texture descriptors by comparing each pixel with its neighbors:

[Figure: computing an LBP code on a 3×3 pixel block]

  • In the example above, we take a 3x3 pixel block from a grayscale image.
  • Next, we compare each surrounding pixel to the center pixel (value 5). If it’s greater than or equal to the center, we note it as 1; otherwise, we mark it as 0.
  • This produces an 8-digit binary number arranged around the center pixel.
  • Then, we multiply these binary digits by their respective weights (powers of 2).
  • The sum of those values gives us the LBP value (in this case, 1 + 4 + 16 + 32 equals 53).

The Local Binary Patterns method proves to be a computationally simple yet powerful approach for classifying textures.
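To make the procedure concrete, here is a minimal NumPy sketch of the same computation on a hypothetical 3×3 patch; note that the weight ordering is a convention and varies between implementations:

import numpy as np

# Hypothetical 3x3 grayscale patch; the center pixel has value 5
patch = np.array([[6, 2, 8],
                  [1, 5, 7],
                  [3, 9, 4]])
center = patch[1, 1]

# Read the eight neighbors clockwise from the top-left corner
neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]

# Threshold: 1 if the neighbor >= center, else 0
bits = [int(n >= center) for n in neighbors]

# Weight each bit by a power of two and sum to get the LBP value
lbp_value = sum(b * (2 ** i) for i, b in enumerate(bits))
print(bits, lbp_value)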

Scale-Invariant Feature Transform (SIFT)

SIFT creates feature descriptors that are invariant to scale, rotation, and illumination changes:

  1. Start by spotting potential interest points using the difference-of-Gaussian function.
  2. Then, localize the key points by eliminating low-contrast points and edge responses that could muddy the results.
  3. Next, assign an orientation to each key point based on the local gradients in the image.
  4. Finally, generate descriptors that capture the gradient info around each key point.

The SIFT method maintains strong performance yet requires more computational resources than some alternatives.
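As a short sketch, SIFT keypoints and descriptors can be extracted with OpenCV; SIFT has been in the main opencv-python module since version 4.4, and the image path below is a placeholder:

import cv2

# Load an image in grayscale; the path is a placeholder
img = cv2.imread('example.jpg', cv2.IMREAD_GRAYSCALE)

# Create the SIFT detector (main module in opencv-python >= 4.4)
sift = cv2.SIFT_create()

# keypoints hold locations/scales/orientations; descriptors are 128-d vectors
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)  # e.g., N keypoints, shape (N, 128)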

These approaches establish the foundation for image classification without deep learning because they allow manual feature design. However, you still apply classical machine learning techniques to classify the features after extraction.

Traditional Machine Learning Algorithms for Image Classification

Once features are extracted, various machine-learning algorithms can be applied for classification. Let’s explore some of them:

Support Vector Machines (SVM)

SVM for image classification determines the best hyperplane that maximizes the margin between different classes.

[Figure: a hyperplane with maximum margin separating two classes]

The illustration displays red and blue point classes separated by a hyperplane. The dashed lines indicate the margin around the hyperplane, while the support vectors appear as circled points.

SVMs perform well on image classification tasks when paired with effective feature extraction techniques.
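As a toy illustration of the idea (separate from the full pipeline later in this guide), here is a linear SVM fit on synthetic 2D points with scikit-learn:

from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two synthetic, linearly separable point clouds
X, y = make_blobs(n_samples=100, centers=2, random_state=42)

# A linear kernel keeps the decision boundary a straight hyperplane
clf = SVC(kernel='linear', C=1.0)
clf.fit(X, y)

# The support vectors correspond to the circled points in the illustration
print('Support vectors:', clf.support_vectors_.shape)
print('Prediction for (0, 0):', clf.predict([[0.0, 0.0]]))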

Pros of SVM

  • Works well in high-dimensional spaces.
  • Good at managing non-linear boundaries with kernel functions.
  • Strong against overfitting if you tune the parameters carefully (think C and kernel settings).

Cons of SVM

  • It can be computationally intensive with massive datasets, especially using complex kernels.
  • Getting the right parameters can be tricky and time-consuming.

K-Nearest Neighbors (KNN)

KNN classifies an image based on the majority class among its K closest neighbors in the feature space:

  1. First, calculate the distance (Euclidean, Manhattan, etc.) between the features of the test image and those of all the training images.
  2. Next, select the K closest images.
  3. Assign the class that appears most frequently among these neighbors.
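A minimal sketch of these steps with scikit-learn, using the small built-in digits dataset as a stand-in for extracted features:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 8x8 digit images, already flattened into 64-dimensional feature vectors
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

# Euclidean distance by default; the 5 nearest neighbors vote on the label
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print('Test accuracy:', knn.score(X_test, y_test))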

Pros of KNN

  • It’s simple to implement.
  • The implementation does not require a training phase.
  • It works efficiently if the feature representation is discriminative.

Cons of KNN

  • The prediction process becomes slow for large datasets (because it searches for neighbors at query time).
  • This algorithm requires substantial memory because it stores all of the training data.
  • The choice of k and the distance metric significantly affects the performance.

Decision Trees

The classification process in decision trees involves recursive partitioning of the feature space to reach decisions:

  1. First, select the feature that does the best job of splitting the data.
  2. Then, create child nodes based on that split.
  3. Keep repeating recursively until stopping criteria are met.
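A short sketch with scikit-learn; export_text prints the learned splits, illustrating the interpretability discussed below:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

# Splits are chosen by Gini impurity by default; max_depth limits recursion
tree = DecisionTreeClassifier(max_depth=10, random_state=42)
tree.fit(X_train, y_train)
print('Test accuracy:', tree.score(X_test, y_test))
print(export_text(tree, max_depth=2))  # first two levels of the learned tree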

Pros of Decision Trees

  • The results produced are highly interpretable since you can visualize the tree structure.
  • The model quickly trains and predicts data when working with moderate-size datasets.
  • It handles both numerical and categorical features well.

Cons of Decision Trees

  • It can overfit easily if not pruned.
  • A single decision tree might not reach the best accuracy (but ensemble methods such as Random Forests usually produce impressive results).

These image classification algorithms operate without deep learning and can achieve strong results when combined with robust feature engineering.

Practical Implementation with Python

This tutorial will address traditional image classification by demonstrating feature extraction and classification methods using scikit-learn and OpenCV. We will use the Fashion MNIST dataset.

Import the libraries

The script starts by importing the essential libraries:

import numpy as np
from tensorflow.keras.datasets import fashion_mnist
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
from sklearn.base import BaseEstimator, TransformerMixin
from skimage.feature import hog, local_binary_pattern

Load and prepare the Fashion MNIST dataset

# Load and prepare data
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images.astype('float32') / 255.0
test_images = test_images.astype('float32') / 255.0

After importing the libraries, we load the dataset and extract the image and label arrays from the training and test sets. The pixel values, which initially range from 0 to 255, are normalized to the [0, 1] range by dividing by 255.0. This improves training performance and stability.

Extract features using HOG, LBP, or histograms

class FeatureExtractor(BaseEstimator, TransformerMixin):
    def __init__(self, feature_type='hog'):
        self.feature_type = feature_type

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        if self.feature_type == 'hog':
            return np.array([hog(img, orientations=8, pixels_per_cell=(4, 4),
                                 cells_per_block=(1, 1), visualize=False) for img in X])
        elif self.feature_type == 'lbp':
            return np.array([local_binary_pattern(img, P=8, R=1, method='uniform').flatten()
                             for img in X])
        elif self.feature_type == 'histogram':
            return np.array([np.histogram(img, bins=32, range=(0, 1))[0] for img in X])
        else:
            raise ValueError("Invalid feature type. Choose 'hog', 'lbp', or 'histogram'.")

The FeatureExtractor class is the core element of this code. It subclasses BaseEstimator and TransformerMixin, making it compatible with scikit-learn pipelines as a custom transformer. The class extracts different image features depending on the feature_type parameter specified during initialization.

If the feature type is set to 'hog', each image is transformed into a HOG feature vector. The hog() function comes from skimage.feature and computes histograms of oriented gradients. Its parameters are described as follows:

| Parameter | Meaning |
| --- | --- |
| img | The input grayscale image (or RGB converted to grayscale) |
| orientations=8 | Number of orientation bins used to represent gradient direction (e.g., 0° to 180°) |
| pixels_per_cell=(4, 4) | Each cell is 4×4 pixels; gradients are computed within each cell |
| cells_per_block=(1, 1) | Each normalization block consists of a single cell (i.e., no block normalization) |
| visualize=False | If True, also returns an image showing the HOG visualization |

On the other hand, if feature_type is 'lbp', the system computes the image’s local binary pattern with local_binary_pattern() and flattens it into a one-dimensional array. The parameters are described as follows:

| Parameter | Meaning |
| --- | --- |
| img | The input grayscale image |
| P=8 | Number of circularly symmetric neighborhood sampling points |
| R=1 | The radius of the circle (distance from the center pixel to the neighbors) |
| method='uniform' | Use the uniform LBP variant (fewer patterns, more robust features) |

Finally, the 'histogram' option computes a 32-bin grayscale histogram for each image.
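As a quick sanity check (assuming the FeatureExtractor class and the Fashion MNIST arrays defined above), you can inspect the feature dimensionality each option produces:

# Inspect the feature dimensionality produced by each extractor
sample = train_images[:100]
for ft in ('hog', 'lbp', 'histogram'):
    feats = FeatureExtractor(feature_type=ft).transform(sample)
    print(ft, feats.shape)  # e.g., hog -> (100, 392) for 28x28 inputs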

Train multiple classifiers in a scikit-learn pipeline

def create_pipeline(classifier, feature_type='hog'):
    return Pipeline([
        ('feature_extractor', FeatureExtractor(feature_type=feature_type)),
        ('scaler', StandardScaler()),
        ('pca', PCA(n_components=0.95)),
        ('classifier', classifier)
    ])

# Define classifiers
classifiers = {
    'SVM': SVC(kernel='rbf', C=10, gamma='scale', random_state=42),
    'KNN': KNeighborsClassifier(n_neighbors=5, weights='distance'),
    'Decision Tree': DecisionTreeClassifier(max_depth=10, random_state=42)
}

# Train and evaluate models
results = {}
for name, clf in classifiers.items():
    print(f"\nTraining {name}...")
    pipeline = create_pipeline(clf, feature_type='hog')  # You can change 'hog' to 'lbp' or 'histogram'
    pipeline.fit(train_images, train_labels)

    predictions = pipeline.predict(test_images)
    report = classification_report(test_labels, predictions, output_dict=True)
    results[name] = report

    print(f"{name} Results:")
    print(classification_report(test_labels, predictions))

The code above defines a helper function create_pipeline. This function accepts a classifier and a feature type as input parameters to build a scikit-learn pipeline. The pipeline performs four main steps:

  • The FeatureExtractor initially converts raw images into features.
  • These features are then normalized using standard scaling.
  • This is followed by PCA, which reduces dimensionality and maintains 95% of the data variance.
  • Finally, the selected classifier can operate on the transformed data.

The classification models include SVM with RBF kernel, a KNN classifier that uses distance-based voting, and a Decision Tree with a depth limit set at 10. A dictionary stores these models to simplify iteration. The training and evaluation process involves iterating through each classifier. The script displays each model name before building a pipeline with the HOG feature. Then, it trains the model and generates predictions on the test data.

Evaluate and compare models using accuracy and F1 score

# Compare model performance
print("\nModel Comparison:")
for model, metrics in results.items():
    print(f"{model}:")
    print(f"Accuracy: {metrics['accuracy']:.3f}")
    print(f"Macro Avg F1: {metrics['macro avg']['f1-score']:.3f}\n")

The accuracy and macro-averaged F1-score are displayed for each model. This final step enables a fast comparison of each classifier’s performance. The performance can be summarized in the following table:

| Model | Accuracy | Macro Avg F1 |
| --- | --- | --- |
| SVM | 0.891 | 0.891 |
| KNN | 0.845 | 0.845 |
| Decision Tree | 0.775 | 0.776 |

SVM Dominance

The SVM classifier delivered the best results, with 89.1% accuracy and an identical macro F1 score, which shows its strong effectiveness. This aligns with SVMs’ known strengths:

  • It manages high-dimensional data well (especially after applying PCA).
  • The RBF kernel produces effective results on non-linear decision boundaries.
  • It benefits from HOG features capturing texture and shape information.

KNN Performance

The 84.5% scores suggest:

  • Distance-based approaches achieve satisfactory results when applied to normalized features.
  • Performance could potentially improve with k-value tuning (see the sketch below).
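For example, a hypothetical grid search over k (reusing create_pipeline and a subset of the training data to keep it fast) might look like this:

from sklearn.model_selection import GridSearchCV

# Search over the number of neighbors; 'classifier' is the pipeline step name
param_grid = {'classifier__n_neighbors': [3, 5, 7, 9, 11]}
search = GridSearchCV(
    create_pipeline(KNeighborsClassifier(weights='distance'), feature_type='hog'),
    param_grid, cv=3, n_jobs=-1)
search.fit(train_images[:5000], train_labels[:5000])
print(search.best_params_, search.best_score_)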

Decision Tree Limitations

The 77.5-77.6% scores indicate:

  • Poor compatibility with gradient-based HOG features.
  • Oversimplification from the max_depth=10 constraint.

Comparison: Machine Learning vs. Deep Learning for Image Classification

The choice between traditional machine learning and deep learning for image classification requires careful consideration of your constraints and goals.

| Aspect | Traditional ML | Deep Learning |
| --- | --- | --- |
| Data requirement | Often performs well on smaller datasets. | Typically requires large labeled datasets. |
| Feature extraction | Manual; domain expertise (edges, textures) is needed. | Automatic feature learning from raw pixels. |
| Computational resources | Less demanding (CPUs are often enough). | More demanding (GPUs/TPUs often required). |
| Interpretability | Easier to interpret (especially trees). | Often a “black box,” though interpretability methods exist. |
| Performance | Good baseline; may struggle with complex images. | State-of-the-art results on large datasets. |
| Versatility | Good for quick prototypes and simpler tasks. | Dominates advanced tasks like object detection, segmentation, etc. |

Use Cases and Practical Scenarios

Under what conditions should you choose image classification methods without neural networks? Here are some common scenarios:

  • Small or specialized datasets, where deep networks would overfit.
  • Limited computational resources (CPUs only, no GPUs).
  • Applications where model interpretability is required.
  • Quick prototypes and baselines before committing to deep learning.

That said, deep learning models achieve state-of-the-art performance if you have a large, diverse dataset and sufficient computational resources.

FAQ

Which method is best for image classification?
There is no universal “best” method; the right choice depends on data size, feature complexity, hardware resources, and interpretability requirements. Deep learning produces high-accuracy results on large labeled datasets with enough computational power. Traditional image classification techniques such as SVM or Random Forests may be more practical for smaller datasets or when interpretability is required.

What is image classification using unsupervised learning?
Unsupervised learning aims to find hidden structures in unlabeled data by grouping images into clusters without known class labels. A practical application of clustering algorithms (like K-Means) is grouping images based on their similarities. However, this task cannot be considered strictly “classification” since classification requires labeled data. Unsupervised learning can help with pre-clustering, anomaly detection, etc., preparing data for subsequent supervised classification tasks.
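As an illustrative sketch (reusing the Fashion MNIST images loaded earlier, with raw pixels as crude features), K-Means can group unlabeled images like this:

from sklearn.cluster import KMeans

# Flatten each 28x28 image into a 784-dimensional vector
X = train_images[:2000].reshape(2000, -1)

# Group the unlabeled images into 10 clusters
kmeans = KMeans(n_clusters=10, n_init=10, random_state=42)
cluster_ids = kmeans.fit_predict(X)
print(cluster_ids[:20])  # cluster assignments, not class labels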

What are the alternatives to CNN for image classification?
Alternatives include:

  • SVM, KNN, and Decision Trees.
  • Random Forests or other ensemble techniques can also be great options.
  • Gradient Boosting Machines (XGBoost, LightGBM) with custom image features.

While these methods serve as alternatives, their performance depends on the specific requirements of each image classification task.

Can images be classified without deep learning?
Absolutely. Before the rapid adoption of CNNs, image classification algorithms without deep learning techniques were the dominant industry standard. They remain relevant for handling smaller datasets and resource-constrained scenarios.

Are traditional image classification methods still relevant today?
Classic algorithms require less computational power and can be more straightforward to interpret. They perform well when features are engineered properly. These algorithms are often used in specialized areas, niche systems, and academic environments, where they serve to teach fundamental concepts in computer vision and machine learning.

Conclusion

Image classification without neural networks may appear outdated in our current AI-centric era. However, the approach remains important across many practical applications, particularly when data availability or computational power is limited, or when model transparency is essential. By combining engineered features such as edges, textures, and color histograms with classical machine learning methods, you can build robust, efficient, and interpretable models. It is generally advisable to evaluate both traditional machine learning methods and modern deep learning approaches to determine which best meets your needs. From there, you can explore advanced topics such as few-shot learning for scenarios with very limited data.
