Kevin A. Bryan
@afinetheorem.bsky.social
Assoc Professor of Strategic Management, University of Toronto; Chief Economist, Creative Destruction Lab Toronto; cofounder, AllDayTA; cofounder, NBER Innovation PhD Boot Camp. http://www.kevinbryanecon.com and @AFineTheorem on Twitter
For students, *I don't care* if they get the right answers. Why? If wrong, they have to explain to the AI what they were thinking before moving on. Cheating doesn't save time b/c I don't grade on correctness, just whether you work through the quiz! Try it here app.alldayta.com/university-o... 3/4
October 6, 2025 at 8:43 PM
Toss my lecture audio, slides, and handouts into a module. A crazy AI workflow pulls out learning goals. The AI then spins up questions (which you can approve or not) from your docs, plus context on why students might get them wrong. 2/4
October 6, 2025 at 8:43 PM
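(A minimal sketch of that generate-from-your-docs idea, assuming an OpenAI-style client and course materials already extracted to plain text; the model name, prompts, and `propose_questions` function are illustrative, not the actual All Day TA pipeline.)

```python
# Illustrative sketch only, not the All Day TA pipeline. Assumes an OpenAI-style
# chat client and course materials already extracted to plain text.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def propose_questions(course_text: str, n: int = 5) -> str:
    """Extract learning goals from course materials, then draft quiz questions
    tied to them, each with the likely misconception behind a wrong answer."""
    goals = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": "List the key learning goals in these course materials:\n\n"
                              + course_text}],
    ).choices[0].message.content

    return client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Write {n} quiz questions of varying difficulty that test "
                              "these learning goals. For each, note the misconception a "
                              "student who gets it wrong probably holds:\n\n" + goals}],
    ).choices[0].message.content
```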
I know it's my company, but All Day TA's AI-driven quizzes are so good. Students cheat on all take-home work. How do you get them to learn? Do even better than we used to by having them learn *as they do low-stakes hw*. Here it is in use just this week in a Texas univ course - students really use this. 1/4
October 6, 2025 at 8:43 PM
New class on Progress starting tomorrow - I'm amped! Trying to put some rigor from economics, economic history, and philosophy on a topic very much in the air. It will be awesome.

(And it's the first class running slides in my all-HTML, browser-based slideshow program - details soon!) 1/2
September 17, 2025 at 5:48 AM
Perhaps of interest to folks with social science PhD programs: at Rotman, we added an experimental 3-session "tech stack" training in addition to the math boot camp. My lecture was "how to do reproducible, open, quick research", aka version control, LaTeX, AI. 1/2 kevinbryanecon.com/techstack.html
September 8, 2025 at 8:24 PM
PS - An awesome dev of ours was testing the feature and told me "I got it wrong on purpose at first for testing, but then forgot to divide by 2 for expected value until the system brought me there!" Exactly. Imagine this help for the student, and then summed up & reported back to you for each hw!
September 5, 2025 at 7:51 PM
When students do their assignment and get a question wrong, the AI forces them to explain their logic, then uses your lectures, handouts, and so on to try to correct mistakes. We then use another AI system to report back to you precisely where students have been going wrong *and why*. 4/6
September 5, 2025 at 7:15 PM
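(Roughly the shape of that wrong-answer loop, as a hedged sketch: an OpenAI-style client, a placeholder model name, and a hypothetical `handle_wrong_answer` function, not the production system. A second pass would aggregate these diagnoses into the instructor report.)

```python
# Sketch of the wrong-answer loop described above -- illustrative, not the
# production system. Assumes an OpenAI-style client and lecture notes as text.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def handle_wrong_answer(question: str, student_answer: str,
                        student_reasoning: str, lecture_notes: str) -> str:
    """Diagnose the student's stated reasoning and correct it using only the
    course's own materials."""
    prompt = (
        "A student answered this question incorrectly.\n"
        f"Question: {question}\n"
        f"Their answer: {student_answer}\n"
        f"Their explanation of their reasoning: {student_reasoning}\n\n"
        "Using ONLY the lecture notes below, identify where the reasoning went "
        "wrong and walk the student toward the correct approach.\n\n"
        f"Lecture notes:\n{lecture_notes}"
    )
    reply = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return reply.choices[0].message.content
```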
That's exactly what we built. Our system already interpreted the learning goals of your course, topic by topic. For question banks, we propose these using our AI, and once you edit and approve, we spin up question banks of varying difficulty. You can manually add, edit or kill these, of course. 3/6
September 5, 2025 at 7:15 PM
I know there's lots of skepticism about AI here, but let me show you something we put out which I think is a huge improvement for university assignments. This is "Intelligent Quiz", a feature on All Day TA (www.alldayta.com). Assignments now have tons of cheating + little feedback to us or the students. 1/9
September 5, 2025 at 7:15 PM
I'd separate Trump from general public trust (incl. outside US); some unique features there. Even there, though, note that Senate NIH appropriations are up for 2026. I think the medical-related field with the trust deficit is public health, for reasons 1-8. But also - all this below was a mistake.
September 2, 2025 at 2:48 AM
Eight rules to regain trust in universities. All can be done today. They're absolutely not the core beliefs about the role of the university for some faculty right now, but are essential if we're to contribute to the trusted production and diffusion of knowledge. kevinbryanecon.com/trust.html
September 1, 2025 at 10:31 PM
Getting "big" things done and maintaining existing things are not the same. But why? What causes path dependence? What role do liberty, or particular individuals, or cities, or science, or incentives play? 2/3
August 25, 2025 at 5:43 PM
Incredibly excited for my brand new class at Rotman this fall: Progress! Econ history + theory + history of thought + philosophy on why rare orgs at rare times in rare places accomplish new things. Trying to put rigor onto an idea that is very much in the air. kevinbryanecon.com/Bryan-Progre... 1/3
August 25, 2025 at 5:43 PM
To create the XML, just drag and drop a pdf or latex/bib into the browser, wait a minute, and you have things all set. Occasionally a couple of minor manual edits are needed, and you need to place your figures in the same folder. Otherwise, that's it - just upload to your website! 4/5
August 8, 2025 at 7:17 PM
Because the entire thing is just raw XML, it's easy to build AI tools on top. I have two: an auto "layman's summary", and a really good AI chat with sourcing. Here, it does a calc I don't explicitly do in the paper. And it's using only free models! 3/5
August 8, 2025 at 7:17 PM
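(To give the flavor of building on the raw XML -- a minimal sketch assuming a generic OpenAI-style client; the tag structure, file name, and `laymans_summary` function are illustrative, not the reader's actual schema or tools.)

```python
# Sketch of an AI layer on top of the pure-text XML paper. The XML schema,
# model name, and function are assumptions for illustration only.
import xml.etree.ElementTree as ET
from openai import OpenAI

client = OpenAI()

def laymans_summary(xml_path: str) -> str:
    """Pull the plain text out of the paper's XML and ask a model for a
    non-specialist summary."""
    root = ET.parse(xml_path).getroot()
    body_text = " ".join(t.strip() for t in root.itertext() if t.strip())
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any cheap or free model would do here
        messages=[{"role": "user",
                   "content": "Summarize this paper for a non-specialist reader:\n\n"
                              + body_text}],
    )
    return reply.choices[0].message.content
```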
This code, which runs as a single .html file, takes your pdf (or latex/bib), rewrites the whole thing in a pure-text XML format, adds in-line references, footnotes, and zoomable images, and retains clickable navigation. Plus it's prettier and adapts the view for phones. But also... 2/5
August 8, 2025 at 7:17 PM
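(The conversion step, in spirit: a rough sketch assuming pypdf for text extraction; the real tool's output -- in-line references, footnotes, zoomable figures, navigation -- is far richer than this.)

```python
# Rough sketch of the pdf -> pure-text XML step, assuming pypdf. The real
# tool's schema (references, footnotes, figures, navigation) is far richer.
from pypdf import PdfReader
from xml.sax.saxutils import escape

def pdf_to_xml(pdf_path: str, xml_path: str) -> None:
    """Extract each page's text and wrap it in a minimal XML document."""
    reader = PdfReader(pdf_path)
    pages = [f'  <page n="{i + 1}">{escape(page.extract_text() or "")}</page>'
             for i, page in enumerate(reader.pages)]
    with open(xml_path, "w", encoding="utf-8") as f:
        f.write("<paper>\n" + "\n".join(pages) + "\n</paper>\n")
```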
I've built a bunch of tools this summer to move my whole workflow to things that are "pure text" so I can use AI on top. Here's the first: a modern paper reader. PDFs suck. Ugly, terrible interface, big files, fixed. I want papers that are adaptive, interactive, pretty. 1/5
August 8, 2025 at 7:17 PM
I'm worried about universities. We *have to* be seen as neutral, truth-seeking, unbiased, rigorous, or public support will continue to fall. Floating around today was Columbia's core curriculum "Contemporary Civ". Here's the entire post-1865 reading list www.college.columbia.edu/core-curricu... 1/10
July 9, 2025 at 3:06 PM
Editing our writing is a common academic task. I *hate* how spell/grammar checkers annoy me by thinking proper nouns or LaTeX code are misspellings (yes, I want `` ''). I also want *style* tips like a good editor would give me. Problem solved. 1/5
June 12, 2025 at 9:55 PM
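(One way to get at this, as a hedged sketch: mask the LaTeX before asking a model for editor-style suggestions. The regexes, model name, and `style_tips` function are assumptions, not the actual tool.)

```python
# Sketch of the idea: mask LaTeX so commands aren't flagged as misspellings,
# then ask a model for style suggestions. Illustrative, not the actual tool.
import re
from openai import OpenAI

client = OpenAI()

def style_tips(tex_source: str) -> str:
    """Strip comments, math, and command names, then ask for line-editor notes."""
    text = re.sub(r"(?<!\\)%.*", "", tex_source)     # drop LaTeX comments
    text = re.sub(r"\$[^$]*\$", " [math] ", text)    # mask inline math
    text = re.sub(r"\\[a-zA-Z]+\*?", " ", text)      # strip command names, keep their text
    text = text.replace("{", "").replace("}", "")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Act as a careful line editor. Give concrete style "
                              "suggestions (wordiness, passive voice, unclear phrasing) "
                              "for this text:\n\n" + text}],
    )
    return reply.choices[0].message.content
```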
Here is Harvard. (Many) more faculty are "very liberal" than anything from moderate to right, and the 23% "moderate or conservative" are almost all in the hard sciences. How is this sustainable? CEOs, industry researchers, top doctors, even Harvard students (2nd image): all smart, look *nothing* like this. 5/6
May 23, 2025 at 3:45 PM
80 images (plus a holdout set), none of them online, from private collections, some 20 years old, globally distributed, all metadata removed. 95% have no text or road markings of any kind. 2/x
May 12, 2025 at 8:54 PM
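(For the metadata removal, something like the Pillow sketch below would do it: re-save only the pixels, so EXIF/GPS tags are gone. This is an assumption about tooling, not necessarily how the benchmark was actually built.)

```python
# One way to strip EXIF/GPS and other embedded metadata from benchmark photos.
# An assumption about tooling (Pillow), not necessarily what was actually used.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-save only the pixel data, dropping all embedded metadata."""
    with Image.open(src) as im:
        clean = Image.new(im.mode, im.size)
        clean.putdata(list(im.getdata()))
        clean.save(dst)
```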
"Where in the world is this place" is a surprising skill of multimodal LLMs. But as Geoguessr fans know, road markings, words on signs, etc. make it much easier. I made my own benchmark to fix this. 1/x kevinbryanecon.com/HardGeoBench/
May 12, 2025 at 8:54 PM
In the Star today (Canada's biggest paper) on why Canada can't redirect trade away from the US without making the country much poorer, why the (many) attempts to do this historically failed, and what to do instead: www.thestar.com/news/insight...
May 4, 2025 at 5:10 PM
A good measure of Abundance/Progress: how upset are you about regulatory restrictions slowing the self-driving rollout? Self-driving cars cut serious crashes by 80%+, even though the remaining crashes are almost entirely caused by others. Waymos swerve away from even many of these! www.understandingai.org/p/human-driv...
March 31, 2025 at 9:21 PM
I cut the video early due to private info in the final pdf, but what it gives you is a pdf dozens of pages long, identical to what I'd have spent a couple of hours doing manually. Also: the 1st project where I didn't manually write a line of code. 3 LLMs; I PRed each part & asked for corrections.
March 20, 2025 at 3:00 PM