Gregor Schubert
@grayshoebird.bsky.social
Asst. Prof. of Finance @ UCLA Anderson || AI, Urban, Real Estate, Corporate Finance || 🇩🇪 he, his || Previously: HBS, BCG, Princeton
https://sites.google.com/view/gregorschubert
"Family firms" talks about various non-monetary benefits that owners can derive from firms - which presumably mean they don't optimize prices if those conflict with their other objectives. I guess Holmstrom multi-tasking + multiple objectives imply similar things?
March 6, 2025 at 9:22 PM
We find the non-RE wealth increases of HHs with high HO affinity are limited to those in the right housing markets at the right time who happen to see high price increases - NOT a general consequence of homeownership. So homeownership-promoting housing policy has limited wealth effects if housing booms are not guaranteed!
February 28, 2025 at 10:38 PM
We obtain restricted HRS data to see if affinity for HO impacts the portfolios of foreign-born retirees: as expected, they are more likely to own a home & hold more RE in their portfolios. Total non-RE retirement wealth is also higher for those with high HO affinity in their origin country! But why?...
February 28, 2025 at 10:38 PM
We show that AFFINITY matters for housing cycles and the effects of credit supply shocks. High-HO-affinity households enter homeownership at higher rates during the 2000s housing boom and default less during the GFC - see the paper for causal evidence of a greater response to credit supply shocks.
February 28, 2025 at 10:38 PM
It's hard to find exogenous variation in homeownership (HO) to study its effects on HH finance. We build on the literature on how experiences/origins drive financial choices and show that HO rates in origin countries (HOCO) drive HO of the foreign-born in the US! (15% passthrough)
February 28, 2025 at 10:38 PM
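In regression terms, the 15% passthrough maps to a first stage of roughly this form (my notation, not necessarily the paper's exact specification):

HO_i = \alpha + \beta \cdot HOCO_{c(i)} + \gamma' X_i + \varepsilon_i, \quad \hat{\beta} \approx 0.15

i.e., a 10pp higher homeownership rate in household i's origin country c(i) goes with a ~1.5pp higher probability of owning in the US, conditional on controls X_i.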
🚨 New working paper with Caitlin Gorback!

We ask: what happens when households are more likely to WANT to own a home for cultural reasons? We find homeownership increases, they're more responsive to credit supply shocks, and more of their retirement portfolios end up in real estate. 🧵
February 28, 2025 at 10:38 PM
I am worried LLM researchers sometimes bury the lede with regard to "should we trust these systems". The framing below is: LLMs are failing to "earn human trust". But it turns out it's the humans who cannot be trusted - even after seeing the LLM's answer, the humans do worse than the LLM!
January 21, 2025 at 6:35 PM
To me this is one reason that good UX for LLM-based applications is important - users need the designer's guidance to quickly figure out "what is this good at" and "what is this not good at" - there is no time to validate all use cases for each chatbot encountered in the wild!
January 3, 2025 at 12:21 AM
This thread was triggered by this great paper by @keyonv.bsky.social , Ashesh Rambachan, and @sendhil.bsky.social about how humans become overoptimistic about model capabilities after seeing performance on a small number of tasks.
January 3, 2025 at 12:21 AM
Only after repeated use and exploration for where the weaknesses and pitfalls lie, and in which cases the LLM output can (with guardrails!) be trusted, does the user's expectation for LLM capabilities reach the "Plateau of Pragmatism".

BOTH the "Valley" and the "Mountain" are problematic places!
January 3, 2025 at 12:21 AM
...but after some experimenting (@emollick.bsky.social suggests 10+ hours), many people find amazing abilities in some areas where LLMs exceed humans, reaching bedazzlement on "Magic Box Mountain".

However, their "jagged frontier" nature means that LLMs fail on many other seemingly "easy" use cases.
January 3, 2025 at 12:21 AM
Let me try to formalize some thoughts about Gen AI adoption that I have had, which I will call "The Bedazzlement Curve".

Most people still underestimate how useful Gen AI tools would be if they tried to find use cases, and overestimate the issues - they're in "The Valley of Stochastic Parrots".
January 3, 2025 at 12:21 AM
Reporting on AI adoption rates tends to show the importance of having priors.
January 1, 2025 at 7:00 PM
With regard to whether our firm-level Generative AI exposure measure predicts ACTUAL adoption - my forthcoming research shows it does!

See below - our exposure measure with 2022 data strongly predicts whether firms mention Gen AI skills in their job postings in 2024. 2/2
December 11, 2024 at 1:24 AM
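For the curious, that predictive check boils down to something like this sketch - the data file and column names (exposure_2022, genai_posting_2024, industry) are placeholders, not our actual variables:

import pandas as pd
import statsmodels.formula.api as smf

# One row per firm; file and column names are hypothetical.
firms = pd.read_csv("firm_panel.csv")

# Linear probability model: 1{2024 postings mention Gen AI skills}
# regressed on the 2022 exposure measure, with industry fixed effects.
model = smf.ols(
    "genai_posting_2024 ~ exposure_2022 + C(industry)",
    data=firms,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params["exposure_2022"], model.pvalues["exposure_2022"])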
Was very surprised to stumble across a graph from my own research in a presentation by Benedict Evans today!

He makes the fair point that predicting technology effects is hard! Although I prefer to call our analysis "bottom-up", as it builds from microdata to a firm-level exposure measure. 1/2
December 11, 2024 at 1:24 AM
It's incredibly encouraging that even models built for analytical purposes, like o1, can recite Shakespeare.

This means that there are still many "storage" parameters not fine-tuned for analytics, which suggests that distillation can deliver large performance improvements at smaller model sizes.
December 11, 2024 at 12:05 AM
I had to tell a Ph.D. student today what o1 is.

"The future is already here, it's just not evenly distributed"
December 10, 2024 at 11:57 PM
I also realized that assessing "feasibility" might need more details, so I created the "HIRES" framework for thinking about feasibility of GenAI use for a task.

I would love to hear feedback, or pointers to better write-ups of this kind, as I want to make this useful for my students!

2/2
December 2, 2024 at 8:37 PM
How can managers identify GenAI use cases?

I was struggling to find a good framework to teach my MBA students how to find GenAI use cases in their orgs - so I made my own!

I called it the "BEAST" framework for finding LLM use cases - see details below.
December 2, 2024 at 8:37 PM
Happy ChatGPT Day to those who celebrate it!

This birthday AI is being very demure, very mindful, even on its special day.
November 30, 2024 at 7:00 AM
The quotes are from an underrated recent paper by Faia, Laffitte, Mayer and Ottaviano, from this free book on Robots & AI edited by Ing & Grossman: www.taylorfrancis.com/books/oa-edi...
November 25, 2024 at 8:50 PM
...and that explanation feels relevant today as it aligns with anecdotal evidence that tech firms are hiring cautiously while at the same time looking desperately for skilled workers who can manage the new technologies!
November 25, 2024 at 8:50 PM
Does Generative AI upskill or deskill the jobs that are affected? One perspective (not necessarily the correct one!) is the "paradox of automation", where workers need MORE training and specialized skills because the tasks that cannot be automated are the specialized ones...
November 25, 2024 at 8:50 PM
One good use case of LLMs for research that I have found: rapidly going deeper on existing literature reviews to find the papers most relevant to me.

Steps:
1. Paste in lit review
2. Ask for a web search on all the papers, to get titles and abstracts
=> Saves me lots of separate searches (a scripted sketch of this workflow is below)
November 20, 2024 at 8:24 PM
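A minimal scripted version of that workflow, assuming the OpenAI Python client - the file name, model choice, and prompt wording are illustrative (interactively, a chatbot with web search handles step 2 directly):

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Step 1: paste in the literature review (hypothetical file).
lit_review = open("lit_review.txt").read()

# Step 2: ask for every cited paper with title and a short summary.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "List every paper cited in the literature review below, one per "
            "line as 'Authors (Year): Title', with a one-sentence note on "
            "what the review says about it:\n\n" + lit_review
        ),
    }],
)
print(response.choices[0].message.content)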
The actual usage patterns of Generative AI look very different from a simple "automation = bad for workers" perspective: firms often use Gen AI to ENHANCE worker capabilities - we should be interested in how this leads to a restructuring of workplaces and the assignment of different tasks to jobs.
November 20, 2024 at 8:15 PM