AI + FM papers
@ai-fm-papers.bsky.social
A feed of interesting AI / math / formal methods papers. Posts by @m-dodds.bsky.social
Automated Proof Generation for Rust Code via Self-Evolution

“we introduce SAFE, a novel framework that overcomes the lack of human-written proof […] [SAFE achieves] a 70.50% accuracy rate in a benchmark crafted by human experts, [vs] GPT-4o's performance of 24.46%”

arxiv.org/abs/2410.15756
April 12, 2025 at 10:59 PM
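The self-evolution idea in SAFE can be sketched roughly as follows. This is a toy sketch, not the paper's system: `generate_proof` and `verify` are invented stand-ins (the real framework samples Verus proofs for Rust code and fine-tunes the model on every proof the verifier accepts, so later rounds draw from a stronger generator).

```python
# Minimal sketch of a SAFE-style self-evolution loop (toy stand-ins, not the
# paper's implementation): sample candidate proofs, keep only the ones the
# verifier accepts, and treat those as synthetic fine-tuning data.
import random

def self_evolve(programs, generate_proof, verify, rounds=5):
    """Accumulate machine-checked (program, proof) pairs as synthetic data."""
    training_data = []
    for _ in range(rounds):
        for prog in programs:
            proof = generate_proof(prog)      # sample a candidate proof
            if verify(prog, proof):           # the verifier is the ground truth
                training_data.append((prog, proof))
        # SAFE fine-tunes the generator on `training_data` at this point;
        # this sketch omits the training step.
    return training_data

# Toy stand-ins: a "proof" checks out only if it names the right lemma.
random.seed(0)
expected = {"p1": "lemma_a", "p2": "lemma_b"}
data = self_evolve(
    programs=list(expected),
    generate_proof=lambda p: random.choice(["lemma_a", "lemma_b"]),
    verify=lambda p, pr: expected[p] == pr,
)
assert all(expected[p] == pr for p, pr in data)  # only verified pairs survive
```

The key design point is that the verifier, not human labels, filters the training data, which is what lets the loop run without human-written proofs.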
Can LLMs Enable Verification in Mainstream Programming?

“… we explore the ability of LLMs to produce verified code in three verification languages (Dafny, Nagini, and Verus) […] we use manually curated datasets derived from the state-of-the-art Python benchmark, HumanEval”

arxiv.org/abs/2503.14183
April 12, 2025 at 10:51 PM
Formal Verification is Overrated

“Zac Hatfield-Dodds [argues] that relying solely on verification methods may not provide real AI safety”

youtu.be/bs5snugP1VA?...
Zac Hatfield-Dodds – Formal Verification is Overrated [Alignment Workshop]
YouTube video by FAR.AI
February 18, 2025 at 6:51 AM
Proving the Coding Interview: A Benchmark for Formally Verified Code Generation

“We introduce the Formally Verified Automated Programming Progress Standards, or FVAPPS, a benchmark of 4715 samples […] including 1083 curated and quality controlled samples”

arxiv.org/abs/2502.05714
February 12, 2025 at 1:44 AM
Reposted by AI + FM papers
Super excited: my new @darpa program on AI for pure mathematics!

Exponentiating Mathematics (expMath) aims to accelerate the rate of progress in pure math through the development of an AI collaborator and new professional-level math benchmarks.

sam.gov/opp/4def3c13...
February 7, 2025 at 4:58 PM
LLM-Assisted Static Analysis for Detecting Security Vulnerabilities

"[We combine] LLMs with static analysis to perform whole-repository reasoning for security vulnerability detection. [...] IRIS leverages LLMs to infer taint specifications and perform contextual analysis"

arxiv.org/abs/2405.17238
February 1, 2025 at 1:34 AM
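The IRIS pipeline pairs an LLM with a classical taint analysis. A hedged sketch of that second half, with an invented toy call graph and spec sets standing in for what the LLM would infer:

```python
# Sketch of the static-analysis half of an IRIS-style pipeline: given
# source/sink specifications (which the paper's LLM infers), walk the call
# graph and report reachable source-to-sink flows. The graph and specs below
# are invented for illustration.

def find_flows(call_edges, sources, sinks):
    """Report (source, sink) pairs reachable in the call graph via DFS."""
    flows = []
    for src in sources:
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if node in sinks and node != src:
                flows.append((src, node))
            stack.extend(call_edges.get(node, []))
    return flows

# Suppose the LLM inferred these specs for a toy codebase:
edges = {"read_request": ["parse"], "parse": ["run_query"]}
print(find_flows(edges, sources={"read_request"}, sinks={"run_query"}))
# [('read_request', 'run_query')]
```

The division of labor is the interesting part: the LLM handles the open-ended specification inference, while the reachability check stays deterministic.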
AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement

"AlphaVerus [is] a self-improving framework that bootstraps formally verified code generation by iteratively translating programs from a higher-resource language […]"

arxiv.org/abs/2412.06176
January 14, 2025 at 8:29 PM
VerifAI: AI Verification in the Wild @ ICLR 2025

"This workshop explores the intersection of scale-driven generative artificial intelligence (AI) and the correctness-focused principles of verification."

verifai-workshop.github.io
January 9, 2025 at 6:18 AM
Laurel: Generating Dafny Assertions Using Large Language Models

"...we propose Laurel, a tool that uses LLMs to automatically generate helper assertions for Dafny [...] Laurel is able to generate over 50% of the required helper assertions given only a few attempts"

arxiv.org/abs/2405.16792
December 18, 2024 at 5:38 PM
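Laurel's repair loop is easy to picture in outline. A toy sketch with invented helper names (`propose`, `verifier_ok`): the real tool extracts the context of the failing Dafny assertion and asks an LLM for candidates, then reruns the verifier.

```python
# Sketch of a Laurel-style assertion-repair loop (helper names invented):
# propose candidate helper assertions at the failure site and keep the first
# patched program the verifier accepts.

def repair(program: str, marker: str, propose, verifier_ok, attempts=5):
    """Insert a candidate `assert ...;` before `marker` until verification passes."""
    for i in range(attempts):
        candidate = propose(program, i)
        patched = program.replace(marker, f"assert {candidate};\n{marker}")
        if verifier_ok(patched):
            return patched
    return None  # no candidate helped within the attempt budget

# Toy run: the "verifier" only accepts a patch containing `x >= 0`.
prog = "method M() {\n// FAILS\n}"
fixed = repair(
    prog, "// FAILS",
    propose=lambda p, i: ["x > 0", "x >= 0"][i % 2],
    verifier_ok=lambda p: "x >= 0" in p,
)
print(fixed is not None)  # True
```

The "given only a few attempts" framing in the quote corresponds to the small `attempts` budget: each candidate is cheap to check because the verifier does the judging.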
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs

“we introduce […] a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems”
arxiv.org/abs/2210.12283
December 10, 2024 at 1:28 AM
“Today, we’re adding Automated Reasoning checks (preview) as a new safeguard in Amazon Bedrock Guardrails to help you mathematically validate the accuracy of responses generated by large language models (LLMs)”
aws.amazon.com/blogs/aws/pr...
Prevent factual errors from LLM hallucinations with mathematically sound Automated Reasoning checks (preview) | Amazon Web Services
December 9, 2024 at 6:07 PM
Reposted by AI + FM papers
Since all my Twitter content is now gone, I will start reposting some of it here. Here are the slides for my talk on the coming wave of ML-accelerated formal methods, given at the Isaac Newton Institute last month. May interest some of you.
drive.google.com/file/d/1ybQx...
November 29, 2024 at 2:37 PM
Big Sleep: Google’s Project Zero team finds a real vulnerability in SQLite using an LLM-based agent googleprojectzero.blogspot.com/2024/10/from...
From Naptime to Big Sleep: Using Large Language Models To Catch Vulnerabilities In Real-World Code
November 30, 2024 at 9:22 PM
"Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming" - LLM-based proof generation for F*, and a 600K LoC dataset of F* programs and proofs, suitable for ML applications. Impressive results synthesizing real-world proofs about programs!
arxiv.org/abs/2405.01787
November 26, 2024 at 10:06 PM
Grammar-Aligned Decoding - "[We propose] a decoding algorithm that guarantees the output to be grammatical while provably producing outputs that match the conditional probability of the LLM's distribution conditioned on the given grammar constraint" arxiv.org/abs/2405.21047 @lorisdanto.bsky.social
November 25, 2024 at 11:46 PM
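For context on what the quote is claiming: the naive baseline that Grammar-Aligned Decoding improves on simply masks forbidden tokens at each step and renormalizes. A toy sketch of that baseline (the token distribution and grammar below are invented; the paper's contribution is correcting this scheme so samples match the LLM's distribution conditioned on grammaticality, which naive masking distorts when grammatical prefixes lead to dead ends):

```python
# Toy sketch of naive grammar-constrained decoding, NOT the paper's algorithm:
# at each step, zero out tokens the grammar forbids and renormalize the rest.

def constrained_step(lm_probs: dict[str, float], allowed: set[str]) -> dict[str, float]:
    """Keep only grammar-allowed tokens, then renormalize the remaining mass."""
    masked = {tok: p for tok, p in lm_probs.items() if tok in allowed}
    total = sum(masked.values())
    return {tok: p / total for tok, p in masked.items()}

# Hypothetical next-token distribution; a grammar that only allows digits.
probs = {"1": 0.2, "2": 0.2, "x": 0.6}
print(constrained_step(probs, allowed={"1", "2"}))  # {'1': 0.5, '2': 0.5}
```

Renormalizing per step is greedy: it ignores whether a chosen token can still be completed into a grammatical string, which is the bias the paper's method removes.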
Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics arxiv.org/abs/2410.21272
November 24, 2024 at 7:40 PM