ResearchTrend.AI
arXiv:2305.07315
∂B nets: learning discrete functions by gradient descent

12 May 2023
Ian Wright
Abstract

∂B nets are differentiable neural networks that learn discrete boolean-valued functions by gradient descent. ∂B nets have two semantically equivalent aspects: a differentiable soft-net, with real weights, and a non-differentiable hard-net, with boolean weights. We train the soft-net by backpropagation and then 'harden' the learned weights to yield boolean weights that bind with the hard-net. The result is a learned discrete function. 'Hardening' involves no loss of accuracy, unlike existing approaches to neural network binarization. Preliminary experiments demonstrate that ∂B nets achieve comparable performance on standard machine learning problems yet are compact (due to 1-bit weights) and interpretable (due to the logical nature of the learned functions).
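The soft-net/hard-net correspondence described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual construction: it assumes soft values in [0, 1] hardened by thresholding at 0.5, and uses min/max/complement as the soft gates, chosen precisely because thresholding commutes with them, which mirrors the "no loss of accuracy" property claimed for hardening.

```python
import numpy as np

# Illustrative sketch only (hypothetical convention, not the paper's
# implementation): soft values live in [0, 1] and harden to booleans
# by thresholding at 0.5.
def harden(v):
    return v > 0.5

# Soft gates for which hardening commutes with evaluation:
# min(a, b) > 0.5 iff a > 0.5 and b > 0.5, and so on.
def soft_and(a, b): return np.minimum(a, b)
def soft_or(a, b):  return np.maximum(a, b)
def soft_not(a):    return 1.0 - a

# Hypothetical soft activations (e.g. learned weights after training).
a = np.array([0.9, 0.2])
b = np.array([0.7, 0.8])

# Evaluate the soft-net, then harden the result.
soft_out = soft_or(soft_and(a, b), soft_not(a))

# Evaluate the corresponding hard-net on hardened inputs.
hard_out = (harden(a) & harden(b)) | ~harden(a)

# The two aspects agree: hardening the soft output equals the
# hard-net's boolean output.
assert np.array_equal(harden(soft_out), hard_out)
```

With the product t-norm (a * b) instead of min, this equivalence would fail (e.g. 0.6 * 0.6 = 0.36 hardens to False even though both inputs harden to True), which is why threshold-preserving gates matter for lossless hardening.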
