<!DOCTYPE html> <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <title> ChangeLing Lab </title> <meta name="author" content="ChangeLing Lab"> <meta name="description" content="The CMU Language Change and Empirical Linguistics Lab "> <meta name="keywords" content="linguistics, computational-linguistics, natural-language-processing, diachronic, phonology, morphology, sociolinguistics"> <link rel="stylesheet" href="/assets/css/bootstrap.min.css?a4b3f509e79c54a512b890d73235ef04"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/mdbootstrap@4.20.0/css/mdb.min.css" integrity="sha256-jpjYvU3G3N6nrrBwXJoVEYI/0zw8htfFnhT9ljN3JJw=" crossorigin="anonymous"> <link defer rel="stylesheet" href="/assets/css/academicons.min.css?f0b7046b84e425c55f3463ac249818f5"> <link defer rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700|Roboto+Slab:100,300,400,500,700|Material+Icons&display=swap"> <link defer rel="stylesheet" href="/assets/css/jekyll-pygments-themes-github.css?591dab5a4e56573bf4ef7fd332894c99" media="" id="highlight_theme_light"> <link rel="shortcut icon" href="data:image/svg+xml,<svg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20100%20100%22><text%20y=%22.9em%22%20font-size=%2290%22>%CA%A7</text></svg>"> <link rel="stylesheet" href="/assets/css/main.css?d41d8cd98f00b204e9800998ecf8427e"> <link rel="canonical" href="https://changelinglab.github.io/"> <script src="/assets/js/theme.js?9a0c749ec5240d9cda97bc72359a72c0"></script> <link defer rel="stylesheet" href="/assets/css/jekyll-pygments-themes-native.css?5847e5ed4a4568527aa6cfab446049ca" media="none" id="highlight_theme_dark"> <script>initTheme();</script> </head> <body class="fixed-top-nav "> <header> <nav id="navbar" class="navbar navbar-light 
navbar-expand-sm fixed-top" role="navigation"> <div class="container"> <button class="navbar-toggler collapsed ml-auto" type="button" data-toggle="collapse" data-target="#navbarNav" aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar top-bar"></span> <span class="icon-bar middle-bar"></span> <span class="icon-bar bottom-bar"></span> </button> <div class="collapse navbar-collapse text-right" id="navbarNav"> <ul class="navbar-nav ml-auto flex-nowrap"> <li class="nav-item active"> <a class="nav-link" href="/">about <span class="sr-only">(current)</span> </a> </li> <li class="nav-item "> <a class="nav-link" href="/blog/">blog </a> </li> <li class="nav-item "> <a class="nav-link" href="/publications/">publications </a> </li> <li class="nav-item "> <a class="nav-link" href="/projects/">projects </a> </li> <li class="nav-item "> <a class="nav-link" href="/people/">people </a> </li> <li class="nav-item"> <button id="search-toggle" title="Search" onclick="openSearchModal()"> <span class="nav-link">ctrl k <i class="ti ti-search"></i></span> </button> </li> <li class="toggle-container"> <button id="light-toggle" title="Change theme"> <i class="ti ti-sun-moon" id="light-toggle-system"></i> <i class="ti ti-moon-filled" id="light-toggle-dark"></i> <i class="ti ti-sun-filled" id="light-toggle-light"></i> </button> </li> </ul> </div> </div> </nav> <progress id="progress" value="0"> <div class="progress-container"> <span class="progress-bar"></span> </div> </progress> </header> <div class="container mt-5" role="main"> <div class="post"> <header class="post-header"> <h1 class="post-title"> <span class="font-weight-bold">ChangeLing</span> Lab </h1> <p class="desc">Language Change and Empirical Linguistics at CMU</p> </header> <article> <div class="profile float-right"> <figure> <picture> <source class="responsive-img-srcset" srcset="/assets/img/changeling-480.webp 
480w,/assets/img/changeling-800.webp 800w,/assets/img/changeling-1400.webp 1400w," sizes="(min-width: 930px) 270.0px, (min-width: 576px) 30vw, 95vw" type="image/webp"> <img src="/assets/img/changeling.jpg?cbbf6027da1d49a5f66f9a1079d94df1" class="img-fluid z-depth-1 rounded" width="100%" height="auto" alt="changeling.jpg" loading="eager" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </source></picture> </figure> <div class="more-info"> <p>5407 Gates Hillman Complex</p> <p>Language Technologies Institute</p> <p>Carnegie Mellon University</p> </div> </div> <div class="clearfix"> <p>ChangeLing Lab is Carnegie Mellon University’s only research lab focused on understanding how languages change, and how those patterns of change shape what languages are like at any given point in time, <strong>from a computational perspective</strong>. We are interested in phonetics, phonology, and morphology (whether diachronic or synchronic) and in emergent communication, and we have a special concern for using language science to benefit people with disabilities.</p> <p>ChangeLing is led by <a href="https://www.cs.cmu.edu/~dmortens/" rel="external nofollow noopener" target="_blank">David R. Mortensen</a>, an Assistant Research Professor in the Language Technologies Institute. The lab also includes graduate students, former LTI students who still collaborate with David, and visitors.</p> <p>If you are interested in joining ChangeLing, please email David at <a href="mailto:dmortens@cs.cmu.edu">dmortens@cs.cmu.edu</a> with a CV and a description of the work you would like to do with us. Please take the following into account:</p> <ul> <li>We only take on work that has some linguistic angle (either it uses linguistics or it is useful for linguists). 
Students who are interested in machine learning for its own sake would be better served by another lab.</li> <li>We are interested in large language models, but only with respect to their language and linguistic reasoning capabilities. Our lab is not a good place to do general, engineering-focused, or fundamental research on LLMs.</li> </ul> </div> <h2> <a href="/news/" style="color: inherit">news</a> </h2> <div class="news"> <div class="table-responsive" style="max-height: 60vw"> <table class="table table-sm table-borderless"> <tr> <th scope="row" style="width: 20%">Apr 07, 2026</th> <td> <ul> <li>POWSM: A Phonetic Open Whisper-Style Speech Foundation Model (main, top 5%)</li> <li>PRiSM: Benchmarking Phone Realization in Speech Models (main)</li> <li>Communicating in Emergent Language with an Induced Morphological Phrasebook (main)</li> <li>[b] = [d] - [t] + [p]: Self-supervised Speech Models Discover Phonological Vector Arithmetic (findings)</li> <li>Linear Script Representations in Speech Foundation Models Enable Zero-Shot Transliteration (findings)</li> <li>PBEBench: A Multi-Step Programming by Examples Reasoning Benchmark inspired by Historical Linguistics (findings) were accepted to ACL 2026.</li> </ul> </td> </tr> <tr> <th scope="row" style="width: 20%">Oct 12, 2025</th> <td> David will give the Colloquium talk at LTI, <a href="https://www.lti.cs.cmu.edu/misc-pages/david-mortensen-poster.png" rel="external nofollow noopener" target="_blank">“The Reconstruction Will Not Be Supervised.”</a> </td> </tr> <tr> <th scope="row" style="width: 20%">Sep 20, 2025</th> <td> ChangeLing Lab member Brendon Boldt will present two papers in the main session of EMNLP 2025 (in Suzhou): “Morpheme Induction for Emergent Language” and “Searching for the Most Human-like Emergent Language.” </td> </tr> <tr> <th scope="row" style="width: 20%">Sep 01, 2025</th> <td> David Mortensen, Shinji Watanabe, and Jonathan Amith have received an NSF grant to <a 
href="/projects/10_systematic/">leverage systematic patterns among related languages and dialects to improve ASR for low-resource varieties</a>. </td> </tr> <tr> <th scope="row" style="width: 20%">Aug 17, 2025</th> <td> Chin-jou, Eunjung, Kwanghee, and David’s paper “Towards Inclusive ASR: Investigating Voice Conversion for Dysarthric Speech Recognition in Low-Resource Languages” was accepted to Interspeech 2025. </td> </tr> <tr> <th scope="row" style="width: 20%">Aug 08, 2025</th> <td> Atharva Naik, Kexun Zhang, Nate, Aravind Mysore, Clayton Marr, Hong Sng, Rebecca Byrnes, Anna, Kalvin, and David released a pre-print “Can Large Language Models Code Like a Linguist?: A Case Study in Low Resource Sound Law Induction.” </td> </tr> <tr> <th scope="row" style="width: 20%">Jul 28, 2025</th> <td> Eunjung and David’s journal paper “Applications of Artificial Intelligence for Cross-language Intelligibility Assessment of Dysarthric Speech” was accepted to <em>Perspectives of the ASHA SIG 19.</em> </td> </tr> <tr> <th scope="row" style="width: 20%">Jul 01, 2025</th> <td> <ul> <li>“Programming by Example meets Historical Linguistics: A Large Language Model Based Approach to Sound Law Induction”</li> <li>“DialUp! Modeling the Language Continuum by Adapting Models to Dialects and Dialects to Models” (Nate and David’s collaboration with Niyati Bafna, Emily Chang, Kenton Murray, David Yarowsky, and Hale Sirin)</li> <li>“ZIPA: A family of efficient models for multilingual phone recognition” (collaboration with Jian Zhu, Farhan Samir, Eleanor Chodroff) were accepted to ACL 2025 (main).</li> </ul> </td> </tr> <tr> <th scope="row" style="width: 20%">Apr 29, 2025</th> <td> Kwanghee, Eunjung, Kalvin, and David’s paper “Leveraging Allophony in Self-Supervised Speech Models for Atypical Pronunciation Assessment” was accepted to NAACL 2025 (main). 
</td> </tr> <tr> <th scope="row" style="width: 20%">Apr 15, 2025</th> <td> David gave a keynote on computational historical linguistics at Midwest Speech and Language Days 2025. </td> </tr> <tr> <th scope="row" style="width: 20%">Feb 18, 2025</th> <td> “Derivational morphology reveals analogical generalization in large language models” (Leonie and David’s collaboration with Valentin Hofmann, Hinrich Schütze, and Janet B. Pierrehumbert) was accepted to <em>Proceedings of the National Academy of Sciences</em>. </td> </tr> <tr> <th scope="row" style="width: 20%">Jan 17, 2025</th> <td> David Mortensen will give a talk as part of the University of Pittsburgh colloquium series. </td> </tr> <tr> <th scope="row" style="width: 20%">Nov 01, 2024</th> <td> Our paper “Zero-Shot Cross-Lingual NER Using Phonemic Representations for Low-Resource Languages” was accepted to EMNLP 2024 (main) and “Mitigating the Linguistic Gap with Phonemic Representations for Robust Cross-lingual Transfer” was accepted to MRL 2024. </td> </tr> <tr> <th scope="row" style="width: 20%">Sep 10, 2024</th> <td> Congratulations to Kalvin Chang and David Mortensen for winning an Honorable Mention, Best Paper Award at the Interspeech 2024 Responsible Speech Foundation Models Special Session with their paper “Self-supervised Speech Representations Still Struggle with African American Vernacular English” (joint work with Yi-Hui Chou, Hsuan-Ming Chen, Jiatong Shi, Nicole Holliday, and Odette Scharenborg). </td> </tr> <tr> <th scope="row" style="width: 20%">Sep 06, 2024</th> <td> David Mortensen gave a CLSP Seminar at Johns Hopkins University. </td> </tr> <tr> <th scope="row" style="width: 20%">Aug 14, 2024</th> <td> Congratulations to Liang (Leon) Lu for winning the ACL2024 Best Paper Award (Non-Publicized) with his paper “Semisupervised Neural Proto-language Reconstruction” (joint work with Peirong Xie and David Mortensen). 
</td> </tr> <tr> <th scope="row" style="width: 20%">Aug 14, 2024</th> <td> <ul> <li>Kwanghee, Nate, and David’s paper “Wav2Gloss: Generating Interlinear Glossed Text from Speech” (collaboration with Taiqi He, Lindia Tjuatja, Jiatong Shi, Shinji Watanabe, Graham Neubig, Lori Levin)</li> <li>Leon and David’s paper “Semisupervised Neural Proto-language Reconstruction” (collaboration with Peirong Xie) were accepted to ACL 2024.</li> </ul> </td> </tr> <tr> <th scope="row" style="width: 20%">Aug 07, 2024</th> <td> <img class="emoji" title=":sparkles:" alt=":sparkles:" src="https://github.githubassets.com/images/icons/emoji/unicode/2728.png" height="20" width="20"> <strong>ChangeLing Lab is officially born!</strong> <img class="emoji" title=":sparkles:" alt=":sparkles:" src="https://github.githubassets.com/images/icons/emoji/unicode/2728.png" height="20" width="20"> (though it has long existed in fact). </td> </tr> </table> </div> </div> <h2> <a href="/blog/" style="color: inherit">latest posts</a> </h2> <div class="news"> <div class="table-responsive" style="max-height: 60vw"> <table class="table table-sm table-borderless"> <tr> <th scope="row" style="width: 20%">Sep 05, 2024</th> <td> <a class="news-title" href="/blog/2024/hl_and_information/">Information and Comparative Reconstruction</a> </td> </tr> <tr> <th scope="row" style="width: 20%">Aug 15, 2024</th> <td> <a class="news-title" href="/blog/2024/best_paper/">On our ACL Best Paper Award Paper</a> </td> </tr> <tr> <th scope="row" style="width: 20%">Aug 14, 2024</th> <td> <a class="news-title" href="/blog/2024/cl_in_acl/">Is ACL an AI (or NLP or CL) Conference?</a> </td> </tr> </table> </div> </div> <h2> <a href="/publications/" style="color: inherit">selected publications</a> </h2> <div class="publications"> <ol class="bibliography"></ol> </div> <div class="social"> <div class="contact-icons"> <a href="mailto:%64%6D%6F%72%74%65%6E%73@%63%73.%63%6D%75.%65%64%75" title="email"><i class="fa-solid fa-envelope"></i></a> 
<a href="https://orcid.org/0000-0002-3927-6851" title="ORCID" rel="external nofollow noopener" target="_blank"><i class="ai ai-orcid"></i></a> <a href="https://scholar.google.com/citations?user=2iS5aeoAAAAJ&hl" title="Google Scholar" rel="external nofollow noopener" target="_blank"><i class="ai ai-google-scholar"></i></a> <a href="https://www.semanticscholar.org/author/3407646" title="Semantic Scholar" rel="external nofollow noopener" target="_blank"><i class="ai ai-semantic-scholar"></i></a> <a href="https://github.com/changelinglab" title="GitHub" rel="external nofollow noopener" target="_blank"><i class="fa-brands fa-github"></i></a> <a href="https://twitter.com/dmort27" title="X" rel="external nofollow noopener" target="_blank"><i class="fa-brands fa-x-twitter"></i></a> <a href="/feed.xml" title="RSS Feed"><i class="fa-solid fa-square-rss"></i></a> </div> <div class="contact-note">I'm best reached by email or, if you are in the inner circle, by Slack. </div> </div> </article> </div> </div> <footer class="fixed-bottom" role="contentinfo"> <div class="container mt-0"> © Copyright 2026 ChangeLing Lab. Powered by <a href="https://jekyllrb.com/" target="_blank" rel="external nofollow noopener">Jekyll</a> with <a href="https://github.com/alshedivat/al-folio" rel="external nofollow noopener" target="_blank">al-folio</a> theme. Hosted by <a href="https://pages.github.com/" target="_blank" rel="external nofollow noopener">GitHub Pages</a>. Photos from <a href="https://unsplash.com" target="_blank" rel="external nofollow noopener">Unsplash</a>. 
</div> </footer> <script src="https://cdn.jsdelivr.net/npm/jquery@3.6.0/dist/jquery.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script> <script src="/assets/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/mdbootstrap@4.20.0/js/mdb.min.js" integrity="sha256-NdbiivsvWt7VYCt6hYNT3h/th9vSTL4EDWeGs5SN3DA=" crossorigin="anonymous"></script> <script defer src="https://cdn.jsdelivr.net/npm/masonry-layout@4.2.2/dist/masonry.pkgd.min.js" integrity="sha256-Nn1q/fx0H7SNLZMQ5Hw5JLaTRZp0yILA/FRexe19VdI=" crossorigin="anonymous"></script> <script defer src="https://cdn.jsdelivr.net/npm/imagesloaded@5.0.0/imagesloaded.pkgd.min.js" integrity="sha256-htrLFfZJ6v5udOG+3kNLINIKh2gvoKqwEhHYfTTMICc=" crossorigin="anonymous"></script> <script defer src="/assets/js/masonry.js" type="text/javascript"></script> <script defer src="https://cdn.jsdelivr.net/npm/medium-zoom@1.1.0/dist/medium-zoom.min.js" integrity="sha256-ZgMyDAIYDYGxbcpJcfUnYwNevG/xi9OHKaR/8GK+jWc=" crossorigin="anonymous"></script> <script defer src="/assets/js/zoom.js?85ddb88934d28b74e78031fd54cf8308"></script> <script src="/assets/js/no_defer.js?2781658a0a2b13ed609542042a859126"></script> <script defer src="/assets/js/common.js?e0514a05c5c95ac1a93a8dfd5249b92e"></script> <script defer src="/assets/js/copy_code.js?12775fdf7f95e901d7119054556e495f" type="text/javascript"></script> <script defer src="/assets/js/jupyter_new_tab.js?d9f17b6adc2311cbabd747f4538bb15f"></script> <script async src="https://d1bxh8uas1mnw7.cloudfront.net/assets/embed.js"></script> <script async src="https://badge.dimensions.ai/badge.js"></script> <script type="text/javascript">window.MathJax={tex:{tags:"ams"}};</script> <script defer type="text/javascript" id="MathJax-script" src="https://cdn.jsdelivr.net/npm/mathjax@3.2.2/es5/tex-mml-chtml.js" integrity="sha256-MASABpB4tYktI2Oitl4t+78w/lyA+D7b/s9GEP0JOGI=" crossorigin="anonymous"></script> <script defer 
src="https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js?features=es6" crossorigin="anonymous"></script> <script type="text/javascript">function progressBarSetup(){"max"in document.createElement("progress")?(initializeProgressElement(),$(document).on("scroll",function(){progressBar.attr({value:getCurrentScrollPosition()})}),$(window).on("resize",initializeProgressElement)):(resizeProgressBar(),$(document).on("scroll",resizeProgressBar),$(window).on("resize",resizeProgressBar))}function getCurrentScrollPosition(){return $(window).scrollTop()}function initializeProgressElement(){let e=$("#navbar").outerHeight(!0);$("body").css({"padding-top":e}),$("progress-container").css({"padding-top":e}),progressBar.css({top:e}),progressBar.attr({max:getDistanceToScroll(),value:getCurrentScrollPosition()})}function getDistanceToScroll(){return $(document).height()-$(window).height()}function resizeProgressBar(){progressBar.css({width:getWidthPercentage()+"%"})}function getWidthPercentage(){return getCurrentScrollPosition()/getDistanceToScroll()*100}const progressBar=$("#progress");window.onload=function(){setTimeout(progressBarSetup,50)};</script> <script src="/assets/js/vanilla-back-to-top.min.js?f40d453793ff4f64e238e420181a1d17"></script> <script>addBackToTop();</script> <script type="module" src="/assets/js/search/ninja-keys.min.js?601a2d3465e2a52bec38b600518d5f70"></script> <ninja-keys hidebreadcrumbs noautoloadmdicons placeholder="Type to start searching"></ninja-keys> <script>let searchTheme=determineComputedTheme();const ninjaKeys=document.querySelector("ninja-keys");"dark"===searchTheme?ninjaKeys.classList.add("dark"):ninjaKeys.classList.remove("dark");const openSearchModal=()=>{const e=$("#navbarNav");e.hasClass("show")&&e.collapse("hide"),ninjaKeys.open()};</script> <script>const 
ninja=document.querySelector("ninja-keys");ninja.data=[{id:"nav-about",title:"about",section:"Navigation",handler:()=>{window.location.href="/"}},{id:"nav-blog",title:"blog",description:"",section:"Navigation",handler:()=>{window.location.href="/blog/"}},{id:"nav-publications",title:"publications",description:"David Mortensen's publications including all publications with other members of ChangeLing Lab.",section:"Navigation",handler:()=>{window.location.href="/publications/"}},{id:"nav-projects",title:"projects",description:"Ongoing ChangeLing Lab Projects",section:"Navigation",handler:()=>{window.location.href="/projects/"}},{id:"nav-people",title:"people",description:"Members and retroactive alumni of ChangeLing Lab",section:"Navigation",handler:()=>{window.location.href="/people/"}},{id:"post-information-and-comparative-reconstruction",title:"Information and Comparative Reconstruction",description:"Informal information-theoretic framing of the comparative method in historical linguistics",section:"Posts",handler:()=>{window.location.href="/blog/2024/hl_and_information/"}},{id:"post-on-our-acl-best-paper-award-paper",title:"On our ACL Best Paper Award Paper",description:"a short philosophical discursion",section:"Posts",handler:()=>{window.location.href="/blog/2024/best_paper/"}},{id:"post-is-acl-an-ai-or-nlp-or-cl-conference",title:"Is ACL an AI (or NLP or CL) Conference?",description:"Ruminations on Emily Bender's Presidential Address at ACL2024",section:"Posts",handler:()=>{window.location.href="/blog/2024/cl_in_acl/"}},{id:"post-why-does-diachronic-linguistics-matter",title:"Why does diachronic linguistics matter?",description:"a short philosophical discursion",section:"Posts",handler:()=>{window.location.href="/blog/2024/why-diachronic-linguistics/"}},{id:"news-sparkles-changeling-lab-is-officially-born-sparkles-though-it-has-long-existed-in-fact",title:'<img class="emoji" title=":sparkles:" alt=":sparkles:" 
src="https://github.githubassets.com/images/icons/emoji/unicode/2728.png" height="20" width="20"> ChangeLing Lab is officially born! <img class="emoji" title=":sparkles:" alt=":sparkles:" src="https://github.githubassets.com/images/icons/emoji/unicode/2728.png" height="20" width="20"> (though it has long existed in...',description:"",section:"News"},{id:"news-kwanghee-nate-and-david-s-paper-wav2gloss-generating-interlinear-glossed-text-from-speech-collaboration-with-taiqi-he-lindia-tjuatja-jiatong-shi-shinji-watanabe-graham-neubig-lori-levin-leon-and-david-s-paper-semisupervised-neural-proto-language-reconstruction-collaboration-with-peirong-xie-were-accepted-to-acl-2024",title:"Kwanghee, Nate, and David\u2019s paper \u201cWav2Gloss: Generating Interlinear Glossed Text from Speech\u201d (collaboration...",description:"",section:"News"},{id:"news-congratulations-to-liang-leon-lu-for-winning-the-acl2024-best-paper-award-non-publicized-with-his-paper-semisupervised-neural-proto-language-reconstruction-joint-work-with-peirong-xie-and-david-mortensen",title:"Congratulations to Liang (Leon) Lu for winning the ACL2024 Best Paper Award (Non-Publicized)...",description:"",section:"News"},{id:"news-david-mortensen-gave-a-clsp-seminar-at-johns-hopkins-university",title:"David Mortensen gave a CLSP Seminar at Johns Hopkins University.",description:"",section:"News"},{id:"news-congratulations-to-kalvin-chang-and-david-mortensen-for-winning-an-honorable-mention-best-paper-award-at-the-interspeech-2024-responsible-speech-foundation-models-special-session-with-their-paper-self-supervised-speech-representations-still-struggle-with-african-american-vernacular-english-joint-work-with-yi-hui-chou-hsuan-ming-chen-jiatong-shi-nicole-holliday-and-odette-scharenborg",title:"Congratulations to Kalvin Chang and David Mortensen for winning an Honorable Mention, 
Best...",description:"",section:"News"},{id:"news-our-paper-zero-shot-cross-lingual-ner-using-phonemic-representations-for-low-resource-languages-was-accepted-to-emnlp-2024-main-and-mitigating-the-linguistic-gap-with-phonemic-representations-for-robust-cross-lingual-transfer-was-accepted-to-mrl-2024",title:"Our paper \u201cZero-Shot Cross-Lingual NER Using Phonemic Representations for Low-Resource Languages\u201d was accepted...",description:"",section:"News"},{id:"news-david-mortensen-will-give-a-talk-as-part-of-the-university-of-pittsburgh-colloquium-series",title:"David Mortensen will give a talk as part of the University of Pittsburgh...",description:"",section:"News"},{id:"news-derivational-morphology-reveals-analogical-generalization-in-large-language-models-leonie-and-david-s-collaboration-with-valentin-hofmann-hinrich-sch\xfctze-and-janet-b-pierrehumbert-was-accepted-to-proceedings-of-the-national-academy-of-sciences",title:"\u201cDerivational morphology reveals analogical generalization in large language models\u201d (Leonie and David\u2019s collaboration...",description:"",section:"News"},{id:"news-david-gave-a-keynote-on-computational-historical-linguistics-at-midwest-speech-and-language-days-2025",title:"David gave a keynote on computational historical linguistics at Midwest Speech and Language...",description:"",section:"News"},{id:"news-kwanghee-eunjung-kalvin-and-david-s-paper-leveraging-allophony-in-self-supervised-speech-models-for-atypical-pronunciation-assessment-was-accepted-to-naacl-2025-main",title:"Kwanghee, Eunjung, Kalvin, and David\u2019s paper \u201cLeveraging Allophony in Self-Supervised Speech Models 
for...",description:"",section:"News"},{id:"news-programming-by-example-meets-historical-linguistics-a-large-language-model-based-approach-to-sound-law-induction-dialup-modeling-the-language-continuum-by-adapting-models-to-dialects-and-dialects-to-models-nate-and-david-s-collaboration-with-niyati-bafna-emily-chang-kenton-murray-david-yarowsky-and-hale-sirin-zipa-a-family-of-efficient-models-for-multilingual-phone-recognition-collaboration-with-jian-zhu-farhan-samir-eleanor-chodroff-were-accepted-to-acl-2025-main",title:"\u201cProgramming by Example meets Historical Linguistics: A Large Language Model Based Approach to...",description:"",section:"News"},{id:"news-eunjung-and-david-s-journal-paper-applications-of-artificial-intelligence-for-cross-language-intelligibility-assessment-of-dysarthric-speech-was-accepted-to-perspectives-of-the-asha-sig-19",title:"Eunjung and David\u2019s journal paper \u201cApplications of Artificial Intelligence for Cross-language Intelligibility Assessment...",description:"",section:"News"},{id:"news-atharva-naik-kexun-zhang-nate-aravind-mysore-clayton-marr-hong-sng-rebecca-byrnes-anna-kalvin-and-david-released-a-pre-print-can-large-language-models-code-like-a-linguist-a-case-study-in-low-resource-sound-law-induction",title:"Atharva Naik, Kexun Zhang, Nate, Aravind Mysore, Clayton Marr, Hong Sng, Rebecca Byrnes,...",description:"",section:"News"},{id:"news-chin-jou-eunjung-kwanghee-and-david-s-paper-towards-inclusive-asr-investigating-voice-conversion-for-dysarthric-speech-recognition-in-low-resource-languages-was-accepted-to-interspeech-2025",title:"Chin-jou, Eunjung, Kwanghee, and David\u2019s paper \u201cTowards Inclusive ASR: Investigating Voice Conversion for...",description:"",section:"News"},{id:"news-david-mortensen-shinji-watanabe-and-jonathan-amith-have-received-an-nsf-grant-to-leverage-systematic-patterns-among-related-languages-and-dialects-to-improve-asr-for-low-resource-varieties",title:"David Mortensen, Shinji Watanabe, 
and Jonathan Amith have received an NSF grant to...",description:"",section:"News"},{id:"news-changeling-lab-member-brendon-boldt-will-present-two-papers-in-the-main-session-of-emnlp-2025-in-suzhou-morpheme-induction-for-emergent-language-and-searching-for-the-most-human-like-emergent-language",title:"Changeling Lab member Brendon Boldt will present two papers in the main session...",description:"",section:"News"},{id:"news-david-will-give-the-colloquium-talk-at-lti-the-reconstruction-will-not-be-supervised",title:"David will give the Colloquium talk at LTI, \u201cThe Reconstruction Will Not Be...",description:"",section:"News"},{id:"news-powsm-a-phonetic-open-whisper-style-speech-foundation-model-main-top-5-prism-benchmarking-phone-realization-in-speech-models-main-communicating-in-emergent-language-with-an-induced-morphological-phrasebook-main-b-d-t-p-self-supervised-speech-models-discover-phonological-vector-arithmetic-findings-linear-script-representations-in-speech-foundation-models-enable-zero-shot-transliteration-findings-pbebench-a-multi-step-programming-by-examples-reasoning-benchmark-inspired-by-historical-linguistics-findings-were-accepted-to-acl-2026",title:"POWSM: A Phonetic Open Whisper-Style Speech Foundation Model (main, top 5%) PRiSM: Benchmarking...",description:"",section:"News"},{id:"projects-systematic-relationships-for-improved-asr",title:"Systematic Relationships for Improved ASR",description:"Better ASR for low-resource varieties",section:"Projects",handler:()=>{window.location.href="/projects/10_systematic/"}},{id:"projects-universal-phone-recognition",title:"Universal Phone Recognition",description:"Recognizing phonetic units in a language-neutral fashion",section:"Projects",handler:()=>{window.location.href="/projects/10_universal/"}},{id:"projects-implicit-and-explicit-reasoning-in-llms",title:"Implicit and Explicit Reasoning in LLMs",description:"Do LLMs 
introspect?",section:"Projects",handler:()=>{window.location.href="/projects/11_blocking/"}},{id:"projects-fbcc-benchmark",title:"FBCC Benchmark",description:"Evaluating the ability of code language models to generalize and plan based on examples",section:"Projects",handler:()=>{window.location.href="/projects/12_benchmark/"}},{id:"projects-automating-comparative-reconstruction",title:"Automating Comparative Reconstruction",description:"Work on developing models that reconstruct protolanguages based on collections of cognate sets",section:"Projects",handler:()=>{window.location.href="/projects/1_automating/"}},{id:"projects-historical-linguistics-as-code-generation",title:"Historical Linguistics as Code Generation",description:"Modeling phonological reconstruction as a code generation problem using LLMs",section:"Projects",handler:()=>{window.location.href="/projects/2_codegen/"}},{id:"projects-emergent-language-corpus-collection",title:"Emergent Language Corpus Collection",description:"Building a collection of corpora from emergent communication systems",section:"Projects",handler:()=>{window.location.href="/projects/3_elcc/"}},{id:"projects-wuggpt",title:"WugGPT",description:"Evaluating the morphological capabilities of Large Language Models",section:"Projects",handler:()=>{window.location.href="/projects/4_wuggpt/"}},{id:"projects-xferbench",title:"XferBench",description:"Evaluating Emergent Communication Systems with Downstream Tasks",section:"Projects",handler:()=>{window.location.href="/projects/5_xferbench/"}},{id:"projects-lexical-change",title:"Lexical Change",description:"Corpus approaches to changes in lexicons",section:"Projects",handler:()=>{window.location.href="/projects/6_lexical_change/"}},{id:"projects-atypical-speech-assessment",title:"Atypical Speech Assessment",description:"Assessing the degree to which speech is 
atypical",section:"Projects",handler:()=>{window.location.href="/projects/7_evaluating/"}},{id:"projects-phonological-representations-for-nlp",title:"Phonological Representations for NLP",description:"Leveraging phonological representations for NLP tasks",section:"Projects",handler:()=>{window.location.href="/projects/8_phonology_for_nlp/"}},{id:"projects-blocking-in-llms",title:"Blocking in LLMs",description:"Do LLMs know the badness of badity?",section:"Projects",handler:()=>{window.location.href="/projects/9_implicit_explicit/"}},{id:"socials-email",title:"Send email",section:"Socials",handler:()=>{window.open("mailto:%64%6D%6F%72%74%65%6E%73@%63%73.%63%6D%75.%65%64%75","_blank")}},{id:"socials-orcid",title:"ORCID",section:"Socials",handler:()=>{window.open("https://orcid.org/0000-0002-3927-6851","_blank")}},{id:"socials-google-scholar",title:"Google Scholar",section:"Socials",handler:()=>{window.open("https://scholar.google.com/citations?user=2iS5aeoAAAAJ&hl","_blank")}},{id:"socials-semantic-scholar",title:"Semantic Scholar",section:"Socials",handler:()=>{window.open("https://www.semanticscholar.org/author/3407646","_blank")}},{id:"socials-github",title:"GitHub",section:"Socials",handler:()=>{window.open("https://github.com/changelinglab","_blank")}},{id:"socials-x",title:"X",description:"Twitter",section:"Socials",handler:()=>{window.open("https://twitter.com/dmort27","_blank")}},{id:"socials-rss",title:"RSS Feed",section:"Socials",handler:()=>{window.open("/feed.xml","_blank")}},{id:"light-theme",title:"Change theme to light",description:"Change the theme of the site to Light",section:"Theme",handler:()=>{setThemeSetting("light")}},{id:"dark-theme",title:"Change theme to dark",description:"Change the theme of the site to Dark",section:"Theme",handler:()=>{setThemeSetting("dark")}},{id:"system-theme",title:"Use system default theme",description:"Change the theme of the site to System Default",section:"Theme",handler:()=>{setThemeSetting("system")}}];</script> 
<script src="/assets/js/shortcut-key.js?6f508d74becd347268a7f822bca7309d"></script> </body> </html>