Technological nature of language and the implications for health professions education
Metadata
- Author: Michael Rowe (ORCID)
- Created: March 27, 2025
- Keywords: artificial intelligence, editor, emergent scholarship, generative AI, LLM, journal, publication, research
- License: Creative Commons Attribution 4.0 International
- DOI: Why no DOI?
Abstract
This essay examines language as humanity's first general purpose technology—a system deliberately developed to extend human capabilities across domains and enable complementary innovations. Through this conceptual lens, large language models (LLMs) emerge not merely as new digital tools, but as a significant evolution in the continuum of language technologies that stretches from spoken language through writing, printing, and digital text. The essay explores how LLMs extend language's core capabilities through unprecedented scale, cross-domain synthesis, adaptability, and emerging multimodality. These extensions are particularly relevant to health professions education, where students face the dual challenge of information overload and inadequate preparation for complex practice environments. By viewing LLMs as an evolution of our most fundamental technology rather than simply new applications, we can better understand their implications for clinical education. This perspective suggests shifting educational emphasis from knowledge acquisition to clinical reasoning and adaptive expertise, developing new forms of "AI literacy" specific to healthcare contexts, and reimagining assessment approaches. Understanding LLMs as part of language's ongoing evolution offers a nuanced middle path between uncritical enthusiasm and reflexive resistance, informing thoughtful integration that enhances rather than diminishes the human dimensions of healthcare education.
Emergent Scholarship context
This essay explores how language—humanity's first general purpose technology—has evolved through various technological innovations, culminating in large language models (LLMs). This perspective embodies emergent scholarship's core principles by reconceptualising knowledge across traditional boundaries. By examining language as technology, we move beyond disciplinary silos, creating connections between linguistics, cognitive science, educational theory, and clinical practice. This exploration demonstrates how valuable insights emerge at intersections of different perspectives and illustrates how knowledge is socially constructed through accumulated interactions across generations. The essay's analysis of LLMs as an extension of language technology rather than merely a computing innovation highlights emergent scholarship's emphasis on recognising complementary innovations that build upon existing foundations while opening new possibilities for knowledge creation and sharing.
Language as technology
We typically think of technologies as physical tools or digital systems—the wheel, the printing press, the smartphone. Yet this conception is unnecessarily narrow. Technology, at its core, is the application of knowledge for practical purposes and the extension of human capabilities. Through this lens, language itself emerges as perhaps humanity's first and most transformative technology.
Language is a sophisticated system we've collectively developed and refined over millennia. It consists of structured rule systems (grammar, syntax) and symbolic representations (words, phonemes) that allow us to encode, transmit, and decode meaning with remarkable precision. While our capacity for language may be biologically innate, specific languages are cultural technologies—inventions we've deliberately crafted and continuously improved.
What distinguishes human language from animal communication is precisely its technological nature. Where animal communication systems are largely instinctual and limited in scope, human language is generative, recursive, and infinitely creative. We can discuss the past and future, create hypotheticals, build abstract concepts, and communicate about communication itself—features that mark language as a technological achievement rather than merely a biological trait.
The philosopher Andy Clark describes language as "the ultimate artifact," noting that "words enable us to objectify our own thoughts and to reason about them." Through this technology, we've extended our individual cognitive capacities far beyond biological limitations, creating what Daniel Dennett calls the "tools for thinking" that have defined human progress.
Language as a General Purpose Technology
General purpose technologies (GPTs) share three key characteristics: economy-wide effects, applications across numerous domains, and the capacity to enable complementary innovations. The steam engine, electricity, and computing are commonly cited examples. Language, when viewed through this framework, stands as the original GPT—one that has transformed human civilisation more fundamentally than any subsequent innovation.
Language's economy-wide effects are evident throughout history. It enabled the division of labour, establishment of trade networks, and economic coordination necessary for complex societies to develop. From hunting parties in early human groups to modern global supply chains, economic activity depends on our ability to communicate intentions, coordinate actions, and establish shared understanding of value.
The applications of language span virtually every domain of human endeavour. Science, governance, art, education, and social organisation all depend fundamentally on language. No other technology has such universal application across human activities.
Perhaps most significantly, language has enabled countless complementary innovations. Writing systems extended language across time and space. Mathematics provided precise symbolic language for quantitative relationships. Legal systems codified social norms through linguistic frameworks. The scientific method established formalised language for empirical knowledge. Each of these represents a complementary innovation built upon the foundation of language as a technology.
Language's most powerful feature as a GPT is enabling cumulative knowledge beyond individual lifespans. Unlike other species, humans build knowledge progressively because language allows us to preserve discoveries, share techniques, and collectively refine ideas across generations. This intergenerational knowledge transfer, facilitated by language, has driven human progress in a way no other technology has matched.
LLMs as an evolution of language technology
The history of language technology shows a clear progression: spoken language → writing → printing → telecommunications → digital text → and now large language models (LLMs). Each transition extended the capabilities of language in new dimensions.
Rather than viewing LLMs merely as new computing applications, we can understand them as the latest evolution in our oldest and most fundamental technology—language itself. LLMs extend language capabilities in several significant ways:
First, LLMs overcome scale limitations of individual human language use. Where a person can process and generate language within the constraints of individual cognitive capacity, LLMs can work with language at volumes that would be impossible for any individual, processing billions of examples to identify patterns and generate new text.
Second, LLMs extend language's synthetic capabilities. Human experts typically specialise in particular domains, with linguistic fluency limited to their areas of expertise. LLMs can bridge previously siloed domains, making connections across disciplines and generating insights that might be difficult for individual humans to discover due to specialisation barriers.
Third, LLMs add adaptability to language. They can shift between linguistic styles, domains, and registers with a flexibility that would require years of immersion for human language users to develop. This enables personalisation of communication to specific audiences, contexts, and prior understanding in ways traditional language use cannot easily achieve.
Finally, as LLMs incorporate multimodal capabilities—understanding and generating not just text but images, audio, and eventually other forms of representation—they bridge previously separate symbolic systems, enabling new forms of expression and communication that extend language's fundamental purpose.
LLMs extend our capacity to apply knowledge
As an evolution of language technology, LLMs extend our capacity to apply knowledge for practical purposes in several unique ways:
Knowledge integration represents a core capability. Traditional language use is constrained by individual expertise and domain boundaries. LLMs can bridge disciplines, drawing connections between concepts from disparate fields and potentially enabling insights that would be difficult for specialists working within established boundaries to discover.
LLMs also extend language's collaborative potential. Traditional language enables small group collaboration, but LLMs effectively incorporate linguistic patterns from billions of examples, scaling collaboration beyond what direct human interaction allows. This offers a form of asynchronous, distributed collaboration across both space and time.
Perhaps most significantly, LLMs offer potential for cognitive extension—allowing humans to offload certain linguistic and informational tasks, thus freeing cognitive resources for higher-order thinking. Just as writing allowed us to externalise memory, LLMs may allow us to externalise certain aspects of information processing, search, and synthesis.
Implications for health professions education
Health professions education has always evolved alongside language technologies. The rise of writing gave birth to formal medical training; printing enabled standardised textbooks; digital technologies facilitated evidence-based practice through improved information access. The emergence of LLMs as an evolution of language technology similarly requires us to rethink fundamental aspects of how we prepare healthcare professionals.
If language is our first GPT, and LLMs represent a significant evolution of this technology, we must reconsider what knowledge matters most in clinical education. Traditional health professions education often emphasises information acquisition and recall—functions that language technologies have progressively externalised, from medical textbooks to clinical databases to point-of-care tools, and now to LLMs. This suggests shifting educational focus from information retrieval toward clinical reasoning, interprofessional communication, and adaptive expertise.
Just as previous language technologies required new forms of literacy, effectively working with LLMs requires developing what we might call "AI literacy in healthcare"—understanding how to prompt effectively for clinical information, evaluate outputs critically against evidence-based standards, and integrate AI-generated content appropriately into clinical decision-making. This represents not a replacement of traditional clinical competencies but an extension of them in response to evolving language technology.
Health professions education may increasingly focus on developing capacities for "collaborative intelligence"—preparing clinicians to work with intelligent systems as partners in clinical reasoning and problem-solving, rather than treating them simply as tools or information sources. This shift from viewing AI as a substitute for clinical capabilities to seeing it as a complement aligns with how we've historically integrated language technologies into healthcare.
Clinical assessment faces particular challenges when LLMs can generate case analyses, treatment plans, and patient communications at a sophisticated level. This may accelerate the shift toward more authentic, practice-based assessments that better reflect real-world application of clinical knowledge—a shift many health professions educators advocated long before AI entered the equation.
The integration of AI literacy into health professions curricula requires thoughtful consideration of where these tools can enhance the development of clinical reasoning and where they might undermine essential learning processes. This parallels historical transitions in how previous language technologies were integrated into clinical education, from concerns about "book learning" versus apprenticeship to debates about calculator use in pharmacology courses to policies on smartphone access during clinical placements.
Conclusion
Viewing LLMs as an evolution of language—humanity's first and most transformative general purpose technology—offers a perspective that avoids both utopian and dystopian extremes in healthcare education. It positions these systems within a long continuum of language technologies that have progressively extended human capabilities while transforming how clinicians think, learn, and work together.
Each previous evolution in language technology—from writing to printing to digital text—initially provoked concern about what might be lost, while ultimately extending clinical capabilities in ways that created new possibilities for healthcare. LLMs will likely follow this pattern, offering both challenges to traditional practices and opportunities for enhancing human clinical reasoning and communication capacities.
Health professions education has historically adapted to each evolution in language technology, and will need to do so again. But rather than focusing narrowly on how to prevent students from using AI to complete traditional assignments, we might more productively consider how health professions education can evolve to help students develop the uniquely human capabilities that complement, rather than compete with, our newest language technology.
By understanding LLMs as part of the ongoing evolution of language itself, we gain a more nuanced perspective on both their limitations and their potential to extend humanity's first and most fundamental technology in powerful new dimensions—particularly in healthcare contexts where language serves as the foundation for clinical reasoning, patient communication, and interprofessional collaboration.
Emergent Scholarship reflection
This essay itself demonstrates emergent scholarship principles in action. Rather than treating language, technology, and education as separate domains, it explores their interconnections and mutual influences across time. The analysis moves beyond traditional disciplinary boundaries to identify patterns that wouldn't be visible from within any single field. By reconceptualising LLMs as extensions of language technology rather than simply as AI applications, we create space for more nuanced understanding of their implications for learning and practice.
The approach taken here embodies "Meaning through medium" by choosing a form that allows exploration of interconnected ideas across traditional boundaries. It demonstrates "Value through engagement" by inviting readers to reconsider fundamental assumptions about both language and technology. And it exemplifies "Sustainability through ecology" by situating technological developments within broader historical and social contexts.
This analysis doesn't reject traditional scholarship but rather builds upon it, drawing insights from linguistics, philosophy of technology, educational theory, and clinical practice to create a more integrated perspective. In doing so, it models how emergent scholarship can generate insights at the intersection of disciplines while respecting the foundations each provides.