StackSight: Unveiling WebAssembly through Large Language Models and Neurosymbolic Chain-of-Thought Decompilation

7 June 2024
Weike Fang, Zhejian Zhou, Junzhou He, Weihang Wang
LRM

Papers citing "StackSight: Unveiling WebAssembly through Large Language Models and Neurosymbolic Chain-of-Thought Decompilation"

4 / 4 papers shown

Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries
Ali Al-Kaswan, Toufique Ahmed, M. Izadi, A. Sawant, Prem Devanbu, A. van Deursen
SyDa · 04 Jan 2023

Boosting Neural Networks to Decompile Optimized Binaries
Ying Cao, Ruigang Liang, Kai Chen, Peiwei Hu
03 Jan 2023

Multi-lingual Evaluation of Code Generation Models
Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, ..., Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang
ELM · 26 Oct 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro · LRM · AI4CE · ReLM · 28 Jan 2022