AIEcon Paper

Reading notes
AI Literature Review Β· Top-5 + Business Journals + NBER Β· 2020-2026 Β· Ranked by Citation

Most-cited AI papers across Economics & Business

150 papers across Top-5 Economics (AER Β· QJE Β· JPE Β· Econometrica Β· REStud) + NBER WP + top business journals (Management Science Β· Mkt Sci Β· SMJ Β· Org Sci Β· J Finance Β· RFS Β· JFE Β· MIS Quarterly Β· ISR Β· POMS Β· J Marketing Β· TAR Β· JAR Β· ...), filtered to AI/ML/automation as the core topic, ranked by OpenAlex citation count (Google Scholar proxy) as of May 2026.

150 total papers Β· 8 Top-5 econ Β· 125 business / field journals Β· 17 NBER WP Β· 3,344 cites of #1

πŸ“Š The field at a glance

Each circle is one paper. Position tells you when it was published and how much it has been cited; color tells you which thread it belongs to. Click any circle for the paper's title and venue. The chart is built directly from the 150-paper dataset.

πŸ” How to read this chart

  • X-axis = publication year. Range 2017–2026, but most papers fall in 2020–2025 (the AI economics era proper).
  • Y-axis = citation count, on a log scale. A doubling on the y-axis corresponds to roughly 10Γ— more citations. The log scale is necessary because citation counts span four orders of magnitude (from ~20 to ~3,344).
  • Color = academic field. Nine fields: Economics (deep blue), Information Systems (purple), Marketing (pink), Strategy (green), Organizational Behavior (orange), Finance (cyan), Operations (lime), Accounting (amber), General Management (gray). Click any field in the legend to hide/show it.
  • Hover any circle to see the paper's title, authors, venue, citations, and field/topic classification.
  • What to look for. (i) The top of the chart shows the field's landmark papers. (ii) Vertical density in a given year shows when each field "broke out." (iii) Side-by-side color clusters reveal which fields are working on AI in parallel β€” but often citing each other less than you'd expect.
πŸ“š Narrative review: nine threads of AI economics, 2020–2026

What follows is not a list of papers β€” it is the intellectual map of the field. The 150 papers cluster into eight live debates and one settled question. For each, we name the key papers, summarize the debate, and say where it currently stands.

1. The productivity question: does AI raise output at market scale?

The single most-asked question in the field. The evidence pulls in opposite directions.

The task-RCT side says yes. Field experiments β€” Brynjolfsson-Li-Raymond on customer support (QJE 2025, 405 cites; NBER WP 2023, 795 cites), Dell'Acqua et al on consulting and teamwork (Org Sci 2023; "Cybernetic Teammate" NBER WP 2025, 35 cites), Agarwal-Moehring-Rajpurkar-Salz on radiology (NBER WP 2023, 108 cites), Luo-Qin-Fang-Qu on sales coaching (J Marketing 2020, 302 cites) β€” find within-task productivity gains of 14–50 percent.

The macro side says no (yet). Brynjolfsson-Rock-Syverson's productivity-paradox framing (NBER WP 2017, 727 cites) defines the puzzle: AI capabilities are exploding while measured TFP is not. The leading reconciliation is the J-curve view (large reorganization investments delay productivity). The competing view is the bottleneck view: Aghion-Jones-Jones (NBER WP 2017, 408 cites) and Acemoglu's macro work argue aggregate gains are bounded by whichever complement remains scarce after AI cheapens one input.

Where it stands. Open. The task-RCT–macro gap is the central empirical puzzle. No paper has yet shown end-to-end production gains at market scale from a dated AI capability release.

2. Labor markets: displacement vs reinstatement

The longest-running thread, and the most empirically resolved for prior automation episodes.

Robots displaced labor. Acemoglu-Restrepo "Robots and Jobs" (JPE 2020, 3,344 cites) is the canonical estimate: one robot per 1,000 workers reduces the employment-to-population ratio by 0.2 percentage points. Replicated cross-country: Adachi-Kawaguchi-Saito on Japan (JLE 2022, 93 cites) β€” where adoption raised rather than lowered employment due to labor scarcity from aging; Acemoglu-Koster-Ozgen on the Netherlands (NBER WP 2023, 53 cites) β€” with worker-level 5-year wage losses of €4,000–5,000. Dixon-Hong-Wu in Management Science 2021 (372 cites) finds adopting Canadian firms grow their labor, displacing non-adopters in the same industry.
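The headline elasticity lends itself to back-of-envelope arithmetic. A minimal sketch, with the exposure level assumed purely for illustration:

```python
def emp_pop_change_pp(robots_per_1000_workers, effect_pp_per_robot=0.2):
    """Acemoglu-Restrepo headline estimate: each additional robot per 1,000
    workers lowers the employment-to-population ratio by ~0.2 pp."""
    return -effect_pp_per_robot * robots_per_1000_workers

# Illustrative (assumed) exposure: a commuting zone gaining 2 robots
# per 1,000 workers.
print(emp_pop_change_pp(2))  # -0.4 pp
```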

New tasks reinstated labor. Autor "New Frontiers" (QJE 2024, 123 cites): 60 percent of US workers in 2018 work in occupations that did not exist in 1940. Acemoglu-Restrepo (Econometrica 2022, 494 cites) decomposes 50–80 percent of post-1980 wage inequality into displacement from middle-skill tasks. Acemoglu-Restrepo "Demographics and Automation" (REStud 2022, 516 cites): aging causes automation, not the reverse.

AI specifically is ambiguous. Acemoglu-Autor-Hazell-Restrepo "AI and Jobs" (JLE 2022, 835 cites): AI-exposed establishments substitute toward less AI-substitutable workers, but show no aggregate establishment-level employment effect yet. Exposure measures: Felten-Raj-Seamans AIOE (SMJ 2021, 371 cites). Generative AI: Hui-Reshef-Zhou (Org Sci 2024, 123 cites) shows direct displacement of exposed freelancers β€” the strongest causal evidence of generative-AI labor displacement to date. Humlum-Vestergaard on Danish administrative data (NBER WP 2025, 25 cites) titles its Solow-Paradox-2.0 finding "Still Waters, Rapid Currents".

Where it stands. Robots displaced workers; generative AI has produced task-level gains and some freelance-level displacement, but aggregate employment effects remain undetected. Country variation is large.

3. Who's adopting AI?

Big firms adopt; small firms don't. McElheran et al on Census-linked firm data (NBER WP 2023, 38 cites): AI is concentrated in large, high-wage, high-productivity firms β€” adoption reinforces existing firm-level inequality rather than narrowing it.

Consumers adopted fast, anyway. Bick-Blandin-Deming (NBER WP 2024, 74 cites): 40 percent of US adults used generative AI by August 2024, 28 percent at work β€” the fastest technology adoption ever measured. Chatterji-Cunningham-Deming-Hitzig (NBER WP 2025, 90 cites) characterizes the distribution of ChatGPT use across occupations.

Markets priced it. Eisfeldt-Schubert-Zhang (NBER WP 2023, 116 cites): firms with higher generative-AI exposure saw significantly higher stock returns immediately post-ChatGPT.

Where it stands. Qualitatively settled (concentration is real, adoption is fast); quantitatively the magnitudes are debated.

4. Algorithmic management: how do workers interact with AI?

Primarily an information-systems and organization-studies thread, mostly outside econ Top-5 β€” but with growing influence on how the field models human–AI complementarity.

Algorithm aversion fades; over-reliance replaces it. Bauer-Zahn-Hinz on explainable AI (ISR 2023, 209 cites) and Turel-Kalhan (MISQ 2023, 107 cites): aversion has implicit-bias roots and weakens with repeated exposure β€” but adding AI explanations can flip workers from aversion to over-reliance.

Knowledge brokerage emerges. Waardenburg-Huysman-Sergeeva "In the Land of the Blind…" (Org Sci 2021, 137 cites): algorithms create new mediating roles for workers who can translate between AI outputs and domain expertise.

Critical-judgment domains resist. Lebovitz-Lifshitz-Levina (Org Sci 2022, 446 cites): radiologists deliberately do not fully engage with AI on critical diagnoses; engagement is a professional practice, not a deficiency.

Where it stands. Rich qualitative evidence on adoption dynamics; weak quantitative welfare estimates.

5. Algorithmic decisions: better or worse than humans?

ML beats human judges on prediction. Kleinberg-Lakkaraju-Leskovec-Ludwig-Mullainathan (QJE 2018, 747 cites): ML pretrial flight-risk predictions could reduce jail populations 41.8 percent without raising crime. Mullainathan-Obermeyer (AER P&P 2017, 159 cites) extends to medical decisions: physicians over-test on visible symptoms and under-test on hidden ones.

Algorithms can reduce, not amplify, discrimination β€” if designed right. Kleinberg-Ludwig-Mullainathan-Sunstein "Discrimination in the Age of Algorithms" (NBER WP 2019, 105 cites): algorithms force explicit objective specification, which exposes human bias hidden in implicit judgment.

But on protected attributes, ML can be unfair. Fuster-Goldsmith-Pinkham-Ramadorai-Walther "Predictably Unequal" (J Finance 2021, 419 cites): ML in mortgage lending narrows price gaps across racial groups on average β€” but creates new inequalities. Kallus-Mao-Zhou (Management Science 2021, 74 cites): methods for fairness assessment when protected-class membership is unobserved.

Algorithms can collude on prices. Klein "Autonomous Algorithmic Collusion" (RAND J Econ 2021, 221 cites): Q-learning algorithms learn to collude in sequential pricing without communication. Foundational antitrust concern.
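A toy sketch can make the mechanism concrete. The code below is an illustration only: it assumes a simple Bertrand-style splitting rule, a made-up price grid, and stateless Ξ΅-greedy learners, none of which is Klein's calibration; his collusion result needs sequential agents that condition on the rival's last price.

```python
import random

random.seed(0)

# Toy pricing duopoly with two reinforcement learners, loosely in the spirit
# of Klein (2021). All parameters here are illustrative assumptions.
PRICES = [1, 2, 3, 4, 5]      # discrete price grid; marginal cost = 0
MONOPOLY_DEMAND = 10

def profits(p1, p2):
    """Cheaper firm takes the whole market; ties split it."""
    if p1 < p2:
        return p1 * MONOPOLY_DEMAND, 0.0
    if p2 < p1:
        return 0.0, p2 * MONOPOLY_DEMAND
    return p1 * MONOPOLY_DEMAND / 2, p2 * MONOPOLY_DEMAND / 2

def run(episodes=20000, alpha=0.1, eps=0.05):
    q1 = {p: 0.0 for p in PRICES}
    q2 = {p: 0.0 for p in PRICES}
    for _ in range(episodes):
        # epsilon-greedy action choice, then a simple value update
        p1 = random.choice(PRICES) if random.random() < eps else max(q1, key=q1.get)
        p2 = random.choice(PRICES) if random.random() < eps else max(q2, key=q2.get)
        r1, r2 = profits(p1, p2)
        q1[p1] += alpha * (r1 - q1[p1])
        q2[p2] += alpha * (r2 - q2[p2])
    return max(q1, key=q1.get), max(q2, key=q2.get)

# With stateless learners, undercutting usually drives prices toward the
# competitive floor; supra-competitive prices emerge in Klein's richer
# sequential, state-dependent setting.
print(run())
```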

Where it stands. ML can beat humans on prediction tasks; whether the resulting decisions improve welfare depends on what objective is encoded β€” which is a normative, not empirical, choice.

6. Sectoral applications: finance, marketing, healthcare, accounting

Each sector has its own ML literature, often unconnected to the others.

Finance is most developed. Gu-Kelly-Xiu "Empirical Asset Pricing via Machine Learning" (RFS 2020, 2,132 cites) is the landmark β€” neural networks, random forests, and gradient boosting substantially outperform linear factor models out-of-sample. Avramov-Cheng-Metzker (Management Science 2022, 204 cites): ML predictability survives standard economic restrictions. Cao-Jiang-Wang-Yang (JFE 2024, 102 cites): analysts who combine AI with human judgment outperform pure-AI and pure-human approaches β€” the cleanest "man + machine" finance result.
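Horse races like Gu-Kelly-Xiu's are typically scored with an out-of-sample R² benchmarked against a naive zero forecast, a common convention in return prediction. A minimal sketch of that metric, with illustrative numbers:

```python
import numpy as np

def r2_oos(actual, predicted):
    """Out-of-sample R^2 against a naive zero forecast, as commonly used in
    return-prediction horse races (e.g., Gu-Kelly-Xiu's R^2_oos)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((actual - predicted) ** 2) / np.sum(actual ** 2)

# Illustrative monthly returns and a forecast that shrinks them by half:
r = np.array([0.02, -0.01, 0.03, -0.02])
print(r2_oos(r, 0.5 * r))  # residual sum is 25% of the benchmark, so 0.75
```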

Marketing is consumer-psychology heavy. Puntoni-Reczek-Giesler-Botti (J Marketing 2020, 971 cites): consumers feel reduced sense of self when AI touches identity-relevant outcomes. Longoni-Cian "Word of Machine" (J Marketing 2020, 690 cites): preference for AI on utilitarian decisions, humans on hedonic ones. Tully-Longoni-Appel (J Marketing 2025, 90 cites): less AI-literate consumers are more receptive to AI β€” counter-intuitive.

Healthcare is mixed. Agarwal et al on radiology (NBER WP 2023, 108 cites): human + AI underperforms pure-AI because humans don't fully use AI signals. FΓΌgener-Grahl-Gupta-Ketter (ISR 2021, 369 cites): productive human–AI delegation requires understanding AI's capabilities. Sahni-Stein-Zemmel-Cutler (NBER WP 2023, 88 cites) estimates AI could reduce US healthcare spending 5–10 percent within 5 years.

Accounting is starting. Commerford et al on auditor algorithm-aversion (JAR 2021, 197 cites). Chen-Cho-Dou-Lev (JAR 2022, 153 cites): ML on detailed financial-statement items outperforms standard accounting indicators for earnings prediction.

Where it stands. Deep within-sector findings; weak cross-sector synthesis. Finance is the methodological leader.

7. AI as methodology: ML in econometrics, LLMs as research tools

A growing set of papers using AI methods to do economics, not study AI as a phenomenon.

ML for causal inference. Chernozhukov-Chetverikov-Demirer-Duflo-Hansen-Newey-Robins "Double/Debiased ML" (NBER WP 2017, 503 cites; AER P&P 2017, 356 cites): foundation paper for ML in causal estimation. Their generic heterogeneous-treatment-effects extension (NBER WP 2018, 176 cites) is now standard in RCT analysis. Farrell-Liang-Misra "Deep Neural Networks for Estimation and Inference" (Econometrica 2021, 293 cites): rigorous inference guarantees that bridge deep learning and traditional econometric theory.
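The cross-fitting recipe can be sketched compactly. The simulation below is an illustration of the orthogonalized residual-on-residual moment, not the paper's implementation: it uses OLS as a stand-in for the ML nuisance learners (the DML framework allows any learner) and made-up coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = theta*d + g(x) + u,  d = m(x) + v,  true theta = 2.0.
n = 4000
x = rng.normal(size=(n, 3))
g = x @ np.array([1.0, -1.0, 0.5])    # outcome nuisance (assumed)
m = x @ np.array([0.5, 0.5, -0.5])    # treatment nuisance (assumed)
d = m + rng.normal(size=n)
y = 2.0 * d + g + rng.normal(size=n)

def fit_predict(x_tr, t_tr, x_te):
    """OLS stand-in for the ML nuisance learner."""
    beta, *_ = np.linalg.lstsq(x_tr, t_tr, rcond=None)
    return x_te @ beta

# Two-fold cross-fitting: residualize y and d on held-out folds, then
# regress residual on residual (the Neyman-orthogonal moment).
half = n // 2
folds = [(slice(0, half), slice(half, n)), (slice(half, n), slice(0, half))]
ry, rd = np.empty(n), np.empty(n)
for tr, te in folds:
    ry[te] = y[te] - fit_predict(x[tr], y[tr], x[te])
    rd[te] = d[te] - fit_predict(x[tr], d[tr], x[te])

theta_hat = (rd @ ry) / (rd @ rd)
print(round(theta_hat, 2))  # close to the true theta = 2.0
```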

ML for hypothesis generation. Ludwig-Mullainathan (QJE 2024, 66 cites): argues ML's most valuable economics use is hypothesis generation, not prediction or causal inference.

LLMs as research tools. Korinek "Generative AI for Economic Research" (JEL 2023, 227 cites) is the most-cited LLM-as-research-tool paper β€” sets methodological norms with templates and best practices. Horton-Filippas-Manning "Homo Silicus" (NBER WP 2023, 260 cites): use LLMs as silicon experimental participants. Hansen-McMahon-Prat (QJE 2018, 619 cites) was the early canonical use of NLP on substantive macro data (FOMC transcripts).

Where it stands. Rapidly settling into standard practice.

8. Policy, regulation, and welfare

The least empirically settled thread.

Optimal robot taxation. Costinot-Werning "Robots, Trade, and Luddism: A Sufficient Statistic Approach" (REStud 2023, 45 cites): a positive robot tax is welfare-improving under distributional concerns but not under purely efficiency-based ones.

Globalization and development. Korinek-Stiglitz (NBER WP 2021, 126 cites): AI may end the labor-cost-arbitrage development path that lifted East Asia out of poverty β€” if AI cheapens skilled labor in advanced economies, developing-country export-led industrialization becomes harder.

Harms and regulation. Acemoglu "Harms of AI" (NBER WP 2021, 126 cites): surveys negative externalities of AI deployment β€” labor displacement, market manipulation, surveillance, misinformation, political concentration. Acemoglu-Lensman "Regulating Transformative Technologies" (AER P&P 2024, 9 cites): argues for procedural rather than substantive regulation (oversight, transparency, reversibility β€” not fixed rules).

Where it stands. Theoretically rich, empirically thin. Policy papers cite each other more than they cite empirical evidence.

9. The settled question: AI is a general-purpose technology

Multiple papers from independent directions converge: Trajtenberg "AI as the next GPT" (NBER WP 2018, 193 cites), Goldfarb-Taska-Teodoridis "Could ML be a GPT?" (NBER WP 2022, 77 cites), Cockburn-Henderson-Stern "AI on Innovation" (NBER WP 2018, 423 cites). The three classical GPT tests β€” pervasiveness, continuing improvement, innovation complementarity β€” all pass.

Where it stands. Closest thing to consensus in the field.

Three open gaps the corpus does not yet close

  1. End-to-end production at market scale. No paper has shown a dated AI capability release causally raising realized output in a clean, observable market at high frequency. Task-RCTs and firm-adoption surveys don't bridge to product entry or industry-level production.
  2. Welfare. Most papers measure productivity, employment, or wages β€” not welfare. The leap from "AI does X to firms" to "AI affects consumer/producer surplus" requires assumptions the corpus does not test.
  3. Dynamics of binding-constraint migration. The task-based framework predicts that when AI cheapens one input, the binding constraint relocates. Where it relocates β€” to user attention, to merchant supply, to platform discovery, to trust β€” is empirically open.

These three gaps describe the frontier.

πŸ›οΈ The conversation by academic field (click to collapse)

The cross-cutting threads above describe the field's debates. This section describes what scholars in each academic field are actually working on, so you can see how each discipline frames the AI question and which papers are the conversation-leaders within it.

Economics β€” 35 papers

The Top-5 + NBER + field-econ corpus, dominated by the Acemoglu-Restrepo task framework. Five active sub-threads.

1. Robots and prior-automation labor displacement is the most empirically resolved sub-thread. Acemoglu-Restrepo "Robots and Jobs" (JPE 2020, 3,344 cites), "Tasks Automation Wage Inequality" (Econometrica 2022, 494 cites), and "Demographics and Automation" (REStud 2022, 516 cites) are the core. Cross-country: Adachi-Kawaguchi-Saito on Japan (JLE 2022, 93 cites) finds the opposite sign of US results β€” aging-driven labor scarcity makes Japanese robot adoption labor-augmenting.

2. AI-specific labor effects are more ambiguous. Acemoglu-Autor-Hazell-Restrepo (JLE 2022, 835 cites) finds establishment-level substitution toward less-AI-exposed workers but no aggregate effect. Autor's "New Frontiers" (QJE 2024, 123 cites) and "Work of the Past, Work of the Future" (NBER WP 2019, 112 cites) document new-task reinstatement.

3. Productivity and growth theory is where the deepest debate lives. Aghion-Jones-Jones (NBER WP 2017, 408 cites) on Baumol bottlenecks vs singularity; Brynjolfsson-Rock-Syverson (NBER WP 2017, 727 cites) on the J-curve productivity paradox; Brynjolfsson-Li-Raymond's customer-support RCT (QJE 2025, 405 cites + NBER WP 795 cites). The RCT-macro gap is the central empirical puzzle in econ AI.

4. Algorithmic decisions is shared with IS and law. Kleinberg-Lakkaraju-Leskovec-Ludwig-Mullainathan (QJE 2018, 747 cites) on pretrial; Kleinberg-Ludwig-Mullainathan-Sunstein "Discrimination in the Age of Algorithms" (NBER WP 2019, 105 cites); Mullainathan-Obermeyer (AER P&P 2017, 159 cites) on medical decisions.

5. Policy, regulation, and welfare is theoretically rich but empirically thin. Costinot-Werning robot tax (REStud 2023, 45 cites); Korinek-Stiglitz globalization (NBER WP 2021, 126 cites); Acemoglu "Harms of AI" (NBER WP 2021, 126 cites); Acemoglu-Lensman procedural regulation (AER P&P 2024, 9 cites).

Where econ stands. The labor question is mostly settled for robots, ambiguous for AI. The productivity question is the field's central open puzzle. Economics has the highest-cited papers but the smallest corpus β€” econ publishing is slow.

Information Systems β€” 34 papers

The largest non-econ field by paper count. IS scholars are the field's front-line empiricists on AI deployment inside organizations.

1. Algorithm aversion mechanics is the dominant thread. Turel-Kalhan (MISQ 2023, 107 cites): aversion has implicit-bias roots and fades with exposure. Lysyakov-Viswanathan "Threatened by AI" (ISR 2022, 78 cites): worker resistance to AI introduction in crowdsourcing. FΓΌgener-Grahl-Gupta-Ketter (ISR 2021, 369 cites): productive human-AI delegation requires understanding AI's capabilities β€” neither blanket trust nor blanket aversion is optimal.

2. Explainable AI and decision quality is the second core thread. Bauer-Zahn-Hinz (ISR 2023, 209 cites): explainable AI shifts workers from aversion to over-reliance. Susarla et al "Janus Effect of Generative AI" (ISR 2023, 235 cites).

3. Algorithmic management on platforms connects to OB/Org Sci. MΓΆhlmann-Zalmanson-Henfridsson (MISQ 2021, 386 cites): theoretical synthesis of how matching algorithms simultaneously allocate work and control behavior on gig platforms. Broek-Sergeeva-Huysman "Machine Meets Expert" (MISQ 2021, 284 cites): ethnographic study of building an AI hiring tool.

4. AI for organizational decisions. Berente-Gu-Recker-Santhanam "Managing Artificial Intelligence" (MISQ 2021, 375 cites): editorial framework for AI governance in organizations. Jussupow et al "Augmenting Medical Diagnosis" (ISR 2021, 365 cites): physicians' AI use patterns.

5. AI for specific tasks. Lou-Wu "AI on Drugs" (MISQ 2021, 151 cites): AI accelerates pharma R&D. Yang-Lau-Abbasi (ISR 2022, 98 cites): deep learning for personality measurement.

Where IS stands. The dominant question: how should AI be designed and deployed in organizations to be effective and not oppressive? "Humans + AI" optimization is the empirical core, with growing methodological sophistication.

Marketing β€” 20 papers

The largest body of consumer-psychology research on AI. Five sub-threads.

1. Consumer reactions to AI services. Puntoni-Reczek-Giesler-Botti "Consumers and AI: An Experiential Perspective" (J Marketing 2020, 971 cites) is the conceptual landmark β€” when AI touches identity-relevant outcomes, consumers feel reduced sense of self.

2. The word-of-machine effect. Longoni-Cian "Artificial Intelligence in Utilitarian vs. Hedonic Contexts" (J Marketing 2020, 690 cites): consumers prefer AI for utilitarian decisions, humans for hedonic ones. Longoni-Cian-Kyung "Algorithmic Transference" (JMR 2022, 99 cites): consumers overgeneralize failures of one AI to all AIs.

3. AI literacy and adoption. Tully-Longoni-Appel "Lower AI Literacy Predicts Greater AI Receptivity" (J Marketing 2025, 90 cites): counter-intuitive β€” less-informed consumers are more open to AI, reframing adoption mental models.

4. AI as a marketing research tool. Li-Castelo-Katona-SΓ‘rvΓ‘ry "LLM for Perceptual Analysis" (Mkt Sci 2024, 129 cites): validates LLMs as a tool for brand-attribute analysis. Burnap-Hauser-Timoshenko "ML for Product Aesthetic Design" (Mkt Sci 2023, 79 cites). Bell-Pescher-Tellis-FΓΌller "AI for Idea Screening in Crowdsourcing" (Mkt Sci 2023, 73 cites).

5. AI in sales. Luo-Qin-Fang-Qu "AI Coaches for Sales Agents" (J Marketing 2020, 302 cites): RCT β€” AI coaching raises sales performance when suggestions align with employee expectations.

Where marketing stands. The central debate: when do consumers accept AI vs prefer humans, and what design choices smooth adoption? Consumer-psych identifies persistent boundary conditions (hedonic, identity, transparency) under which AI is rejected.

Strategy β€” 11 papers

SMJ has built a "ML for management research" + "AI as strategic resource" thread. Four sub-threads.

1. AI exposure for strategic positioning. Felten-Raj-Seamans "AIOE" (SMJ 2021, 371 cites): the most-cited AI occupational-exposure measure used in strategy research. The dataset itself is the contribution.

2. ML for empirical strategy research. Choudhury-Allen-Endres "ML for Pattern Discovery in Management Research" (SMJ 2020, 219 cites): methodological reference for using ML in management empirics. Miric-Jia-Huang "Supervised ML for Large-scale Classification" (SMJ 2022, 190 cites): replicable pipeline for classifying AI patents.

3. AI as a strategic decision tool. Doshi-Bell-Mirzayev-Vanneste "Generative AI and Evaluating Strategic Decisions" (SMJ 2024, 125 cites): RCT showing AI assistance helps managers make better strategic decisions, particularly in unfamiliar contexts. Gaessler-Piezunka "Training with AI: Chess" (SMJ 2023, 78 cites): AI-augmented training raises skill but with diminishing returns at high levels.

4. AI as a competitive force. Tong-Jia-Luo-Fang "Janus Face of AI Feedback" (SMJ 2021, 450 cites): AI feedback changes outcomes differently based on deployment vs disclosure. Choudhury-Starr-Agarwal "ML and Human Capital Complementarities" (SMJ 2020, 244 cites): ML+human teams outperform either alone when bias is actively mitigated.

Where strategy stands. The dominant question: when does AI augment vs substitute managerial judgment, and how does AI shift the boundaries of efficient firm size? Methodological self-reflection (ML for management research) is itself a major contribution.

Organizational Behavior β€” 11 papers

Mostly Organization Science. The richest qualitative and field-experimental evidence on how workers and professionals interact with AI. Five sub-threads.

1. Critical-judgment domains resist AI. Lebovitz-Lifshitz-Levina "To Engage or Not to Engage with AI for Critical Judgments" (Org Sci 2022, 446 cites): radiologists deliberately under-engage AI on consequential diagnoses β€” engagement is a deliberate professional practice, not algorithm aversion in the cognitive-bias sense.

2. Knowledge brokerage emerges from AI. Waardenburg-Huysman-Sergeeva "In the Land of the Blind, the One-Eyed Man Is King" (Org Sci 2021, 137 cites): algorithms create new mediating roles for workers who can translate between AI outputs and domain expertise.

3. Algorithm-augmented work has countervailing forces. Allen-Choudhury "Algorithm-Augmented Work and Domain Experience" (Org Sci 2021, 156 cites): ML help boosts productivity for low-skill workers more than high-skill, but high-skill workers are more averse to ML use. Pachidi-Berends-Faraj-Huysman "Make Way for the Algorithms" (Org Sci 2020, 165 cites): ethnographic study of symbolic disruption.

4. AI and group dynamics. Boussioux-Lane-Zhang-Jacimovic "The Crowdless Future? Generative AI and Creative Problem-Solving" (Org Sci 2024, 204 cites): generative AI replaces traditional crowdsourcing for many idea-generation tasks but at a cost in novelty diversity.

5. AI in research production. Furman-Teodoridis "Automation, Research Technology, and Researchers' Trajectories" (Org Sci 2020, 87 cites): how research-automation tools shape the careers of computer science researchers. Shrestha-He-Puranam-Krogh "Algorithm Supported Induction for Building Theory" (Org Sci 2020, 146 cites): methodological framework for using ML in theory-building.

Where OB stands. The dominant question: when does AI augment expertise vs replace it? When does aversion fade? OB has rich qualitative depth that economics lacks β€” but weaker external-validity claims.

Finance β€” 10 papers

Finance has built its own AI literature parallel to econ. Four sub-threads.

1. ML for asset pricing. Gu-Kelly-Xiu "Empirical Asset Pricing via Machine Learning" (RFS 2020, 2,132 cites) is the methodological landmark β€” neural networks, random forests, and gradient boosting substantially outperform linear factor models out-of-sample. Avramov-Cheng-Metzker (Mgt Sci 2022, 204 cites): ML predictability survives standard economic restrictions. Wu-Chen-Yang-Tindall (Mgt Sci 2020, 108 cites): ML for hedge-fund return prediction. Obaid-Pukthuanthong (JFE 2021, 194 cites): ML on financial images as a sentiment signal beyond text.

2. AI in lending and credit. Fuster-Goldsmith-Pinkham-Ramadorai-Walther "Predictably Unequal" (J Finance 2021, 419 cites): ML in mortgage lending narrows price gaps across racial groups on average β€” but creates new inequalities. The central fairness debate in finance.

3. AI in corporate governance. Erel-Stern-Tan-Weisbach "Selecting Directors Using Machine Learning" (RFS 2021, 200 cites): ML-recommended corporate directors outperform actually-chosen directors on board responsibilities.

4. Generative AI and firm values / analyst behavior. Eisfeldt-Schubert-Zhang "Generative AI and Firm Values" (NBER WP 2023, 116 cites): event-study showing firms with higher generative-AI exposure see higher stock returns post-ChatGPT. Cao-Jiang-Wang-Yang "Man + Machine in Stock Analysis" (JFE 2024, 102 cites): analysts who combine AI with human judgment outperform pure-AI and pure-human β€” the cleanest finance "complementarity" result.

Where finance stands. ML beats traditional models on prediction; the recurring debate is whether ML uncovers economic mechanisms or just statistical regularities. Avramov's "ML vs Economic Restrictions" is the methodological clarification: ML still wins after standard controls.

Operations β€” 10 papers

A more applied thread focused on production, supply chain, and process quality. Three sub-threads.

1. AI in manufacturing process quality. Senoner-Netland-Feuerriegel "Explainable AI to Improve Process Quality" (Mgt Sci 2021, 265 cites): RCT in semiconductor fab shows explainable AI raises yield more than unexplained AI β€” establishing explainability as critical for industrial adoption.

2. AI in supply chain and procurement. Cui-Li-Zhang "AI and Procurement" (M&SOM 2021, 144 cites): AI in procurement reduces supplier failures and raises efficiency, with effects strongest when human override is allowed. Mithas-Chen-Saldanha-Silveira "How AI and Industry 4.0 Transform Operations" (POMS 2022, 256 cites): survey/conceptual paper on the scope of AI-driven operational change.

3. Robot scheduling and warehouse operations. Wang-Sheu-Teo-Xue "Robot Scheduling for Mobile-Rack Warehouses" (POMS 2021, 98 cites): algorithmic design for human-robot coordinated order picking.

Where operations stands. The dominant question: when does AI/ML add value relative to traditional operations-research optimization? Answer emerging: when there is substantial unobserved heterogeneity or noise that ML can capture, AND when human override is preserved.

Accounting β€” 8 papers

The smallest but rapidly emerging thread. Four sub-threads.

1. Algorithm aversion in auditing. Commerford-Dennis-Joe-Ulla "Man Versus Machine: Auditor Reliance on AI" (JAR 2021, 197 cites): auditors rely less on AI estimates than on human estimates even when AI is more accurate β€” direct evidence of algorithm aversion in a high-stakes professional setting.

2. ML for earnings and financial prediction. Chen-Cho-Dou-Lev "Predicting Future Earnings Changes Using Machine Learning" (JAR 2022, 153 cites): ML on detailed financial-statement items beats standard accounting indicators for earnings prediction.

3. Machine + man in lending. Costello-Down-Mehta "Machine + Man" (JAE 2020, 92 cites): field experiment β€” loan officers using AI scores with discretion outperform pure-AI in informal-economy lending markets. A close accounting analog to the finance "man + machine" result.

4. LLMs in accounting research methodology. Kok "ChatGPT for Textual Analysis" (Mgt Sci 2025, 77 cites): methodology guide for using generative LLMs in accounting research.

Where accounting stands. The central debate: when does professional judgment beat algorithms in accounting decisions, and how can AI augment audit quality? The accounting field is younger in this conversation than finance but is converging on similar "man + machine" complementarity findings.

General Management β€” 11 papers

Management Science papers that don't clearly fit a single sub-discipline. Mix of AI-in-healthcare (Wang-Gao-Agarwal "Friend or Foe Teaming with AI", Ibrahim-Kim-Tong "Eliciting Human Judgment"), AI fairness (Kallus-Mao-Zhou unobserved-protected-class), and AI-as-GPT methodological work (Goldfarb-Taska-Teodoridis "Could ML be a GPT?"). These cross-cut the field boundaries β€” they cite economists, finance, OB, and operations alike.

🧡 Deep dive by topic β€” abstract + summary table for each cluster

For each topic cluster: a 300–500 word synthesis covering all papers in the cluster, followed by a sortable summary table. Topics ordered by total citation weight.


πŸ“‹ Methodology

Source filter: 33 venues β€” Top-5 economics (AER, AER P&P, AER Insights, QJE, JPE, Econometrica, REStud); top business journals (Management Science, Marketing Science, Strategic Mgmt J, Organization Science); finance (J Finance, RFS, JFE); information systems (MIS Quarterly, Info Systems Research); accounting (J Acctg & Econ, J Acctg Research, Acctg Review); marketing (J Marketing Research, J Marketing); operations (POMS, M&SOM); top-field econ (J Labor Econ, AEJ Applied/Macro/Micro/Policy, JEP, JEL, RAND, J Public Econ, REStat); NBER Working Paper series.

Topic filter: AI, ML, automation, robotics, generative AI, LLMs, or algorithmic decisions as the primary research focus (not merely an econometric tool, unless the methodological contribution itself is AI-related). Time window: 2020–2026, with three manually-added classics from 2020 and 2022 (Acemoglu-Restrepo "Robots and Jobs" JPE 2020, Acemoglu-Autor-Hazell-Restrepo "AI and Jobs Online Vacancies" JLE 2022, Acemoglu-Restrepo "Tasks, Automation, and the Rise in U.S. Wage Inequality" Econometrica 2022).

Citation source: OpenAlex citation counts as of May 2026 β€” a proxy for Google Scholar. OpenAlex tracks each NBER WP and its journal-published version as separate works; this list shows the higher-cited version with a note. Caveats: (i) Highly influential AI papers in venues outside this 33-source filter are excluded β€” e.g., Eloundou-Manning-Mishkin-Rock (arXiv), Noy-Zhang (Science), Peng-Kalliamvakou-Cihon-Demirer (arXiv), Brynjolfsson-Rock-Syverson (NBER + book chapter), Dell'Acqua "Jagged Frontier" (HBS WP). (ii) The 87 papers in the long tail use auto-generated summaries; the top 63 have hand-curated 2-3 sentence summaries.
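The citation pipeline can be sketched against the OpenAlex works endpoint. The exact filters behind this list are not stated, so the query below is an assumed, illustrative reconstruction; it only builds the request URL and sends nothing.

```python
from urllib.parse import urlencode

# Illustrative reconstruction of an OpenAlex query for highly cited AI papers;
# the search terms and date filters are assumptions, not the authors' script.
BASE = "https://api.openalex.org/works"

def openalex_query(search, date_from="2020-01-01", date_to="2026-05-31", per_page=50):
    params = {
        "search": search,
        "filter": f"from_publication_date:{date_from},to_publication_date:{date_to}",
        "sort": "cited_by_count:desc",  # rank by OpenAlex citation count
        "per-page": per_page,
    }
    return f"{BASE}?{urlencode(params)}"

url = openalex_query("artificial intelligence labor")
print(url)
```

Each returned work carries a `cited_by_count` field, which is the number this list uses as its Google Scholar proxy.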