
Logical GANs: Adversarial Learning through Ehrenfeucht--Fraïssé Games

Main: 10 pages
Bibliography: 2 pages
4 tables
Abstract

GANs promise indistinguishability; logic explains it. We put the two on a budget: a discriminator that can only ``see'' up to a logical depth $k$, and a generator that must look correct to that bounded observer. \textbf{LOGAN} (LOGical GANs) casts the discriminator as a depth-$k$ Ehrenfeucht--Fraïssé (EF) \emph{Opponent} that searches for small, legible faults (odd cycles, nonplanar crossings, directed bridges), while the generator plays \emph{Builder}, producing samples that admit a $k$-round matching to a target theory $T$. We ship a minimal toolkit -- an EF-probe simulator and MSO-style graph checkers -- and four experiments, including real neural GAN training with PyTorch. Beyond verification, we score samples with a \emph{logical loss} that mixes budgeted EF round-resilience with cheap certificate terms, enabling a practical curriculum on depth. Framework validation demonstrates $92\%$--$98\%$ property satisfaction via simulation (Exp.~3), while real neural GAN training achieves $5\%$--$14\%$ improvements on challenging properties and $98\%$ satisfaction on connectivity (matching simulation) through adversarial learning (Exp.~4). LOGAN is a compact, reproducible path toward logic-bounded generation with interpretable failures, effectiveness demonstrated in both simulated and real training, and dials for control.
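To make the abstract's logical loss concrete, here is a minimal Python sketch, not the paper's released toolkit: it plays randomized depth-$k$ EF probes between a generated graph and a target graph, and mixes the probe survival rate with one cheap certificate term (an odd-cycle check via 2-colouring, the kind of legible fault the Opponent exhibits). All names (ef_probe, logical_loss, the greedy Duplicator heuristic, the lam weight) are illustrative assumptions; graphs are assumed to be dicts mapping each vertex to a set of neighbours.

import random

def consistent(G, H, pairs, g, h):
    # Pinning (g, h) must preserve equality and adjacency
    # against every pair pinned in earlier rounds.
    for g2, h2 in pairs:
        if (g == g2) != (h == h2):
            return False
        if (g2 in G[g]) != (h2 in H[h]):
            return False
    return True

def ef_probe(G, H, k, rng):
    # One randomized k-round EF game. Spoiler picks a random vertex
    # in G or H each round; a greedy Duplicator answers with any
    # vertex in the other graph that keeps the pinned partial map a
    # partial isomorphism. Returns True if the Duplicator survives
    # all k rounds (the probe found no depth-k fault). The greedy
    # Duplicator is a heuristic lower bound on the true game value.
    pairs = []
    for _ in range(k):
        if rng.random() < 0.5:  # Spoiler moves in G
            g = rng.choice(list(G))
            cands = [h for h in H if consistent(G, H, pairs, g, h)]
            if not cands:
                return False
            pairs.append((g, rng.choice(cands)))
        else:                   # Spoiler moves in H
            h = rng.choice(list(H))
            cands = [g for g in G if consistent(G, H, pairs, g, h)]
            if not cands:
                return False
            pairs.append((rng.choice(cands), h))
    return True

def is_bipartite(G):
    # Depth-first 2-colouring; a same-colour edge certifies an odd cycle.
    colour = {}
    for start in G:
        if start in colour:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in G[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    stack.append(v)
                elif colour[v] == colour[u]:
                    return False
    return True

def logical_loss(sample, target, k=3, probes=64, lam=1.0, seed=0):
    # Budgeted EF round-resilience plus a cheap certificate term:
    # resilience is the fraction of random depth-k probes the sample
    # survives against the target; the certificate penalises an odd
    # cycle when the target theory demands bipartiteness.
    rng = random.Random(seed)
    survived = sum(ef_probe(sample, target, k, rng) for _ in range(probes))
    resilience = survived / probes
    certificate = 0.0 if is_bipartite(sample) else 1.0
    return (1.0 - resilience) + lam * certificate

# Example: score a triangle (odd cycle) against a 4-cycle at depth k=2.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(logical_loss(tri, c4, k=2))

Raising $k$ or the probe count tightens the bounded observer, which is the dial the depth curriculum would turn during training.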
