ResearchTrend.AI

Toward a Lightweight, Scalable, and Parallel Secure Encryption Engine

18 June 2025
Rasha Karakchi
Rye Stahle-Smith
Nishant Chinnasami
Tiffany Yu
Main: 6 pages · 8 figures · 1 table · Bibliography: 1 page
Abstract

The exponential growth of Internet of Things (IoT) applications has intensified the demand for efficient, high-throughput, and energy-efficient data processing at the edge. Conventional CPU-centric encryption methods suffer from performance bottlenecks and excessive data movement, especially in latency-sensitive and resource-constrained environments. In this paper, we present SPiME, a lightweight, scalable, and FPGA-compatible Secure Processor-in-Memory Encryption architecture that integrates the Advanced Encryption Standard (AES-128) directly into a Processing-in-Memory (PiM) framework. SPiME is designed as a modular array of parallel PiM units, each combining an AES core with a minimal control unit to enable distributed in-place encryption with minimal overhead. The architecture is fully implemented in Verilog and tested on multiple AMD UltraScale and UltraScale+ FPGAs. Evaluation results show that SPiME can scale beyond 4,000 parallel units while maintaining less than 5% utilization of key FPGA resources on high-end devices. It delivers over 25 Gbps in sustained encryption throughput with predictable, low-latency performance. The design's portability, configurability, and resource efficiency make it a compelling solution for secure edge computing, embedded cryptographic systems, and customizable hardware accelerators.
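The abstract describes SPiME as an array of parallel PiM units, each encrypting a 16-byte block of memory in place. The Python sketch below models only that dispatch pattern, not the hardware: the function names (`spime_encrypt`, `pim_unit_encrypt`) are illustrative, and a trivial XOR placeholder stands in for the AES-128 core, since the paper's actual design is a Verilog implementation.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_BYTES = 16  # AES-128 block size

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    """Placeholder for an AES-128 core: XOR with the key.
    NOT secure -- it only illustrates the per-unit data flow."""
    return bytes(b ^ k for b, k in zip(block, key))

def pim_unit_encrypt(memory: bytearray, offset: int, key: bytes) -> None:
    """One modeled PiM unit encrypts its 16-byte block in place."""
    block = bytes(memory[offset:offset + BLOCK_BYTES])
    memory[offset:offset + BLOCK_BYTES] = toy_block_cipher(block, key)

def spime_encrypt(memory: bytearray, key: bytes, n_units: int = 8) -> None:
    """Distribute all 16-byte blocks across parallel 'units'
    (threads here, AES+control-unit tiles in the real design)."""
    offsets = range(0, len(memory), BLOCK_BYTES)
    with ThreadPoolExecutor(max_workers=n_units) as pool:
        # Each unit works on a disjoint region, so in-place
        # updates need no locking.
        list(pool.map(lambda off: pim_unit_encrypt(memory, off, key),
                      offsets))
```

In the hardware described by the paper, the units operate concurrently on the memory array itself, avoiding the CPU-centric data movement the abstract calls out; the thread pool above is only a software stand-in for that parallelism.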

@article{karakchi2025_2506.15070,
  title={Toward a Lightweight, Scalable, and Parallel Secure Encryption Engine},
  author={Rasha Karakchi and Rye Stahle-Smith and Nishant Chinnasami and Tiffany Yu},
  journal={arXiv preprint arXiv:2506.15070},
  year={2025}
}