    Chatbots are now prescribing psychiatric drugs

By Papa Linc · April 3, 2026


    The state of Utah has embarked on a pioneering, yet contentious, journey into the future of healthcare, permitting an artificial intelligence (AI) system to prescribe psychiatric medications without direct human physician oversight. This marks only the second instance across the United States where a state has formally delegated such a significant degree of clinical authority to an AI entity. While state officials champion this move as a potential panacea for escalating healthcare costs and pervasive care shortages, a chorus of physicians and mental health experts voices profound warnings, citing the system’s inherent opacity, considerable risks, and its likely failure to genuinely broaden mental health access for those who need it most.

    The Utah Experiment: Pioneering AI in Psychiatric Care

This groundbreaking initiative is structured as a one-year pilot program, publicly announced last week. Central to this experiment is Legion Health, a San Francisco-based startup whose AI chatbot will be responsible for renewing specific types of psychiatric medication prescriptions. The company touts “fast, simple refills” for Utah-based patients for a monthly subscription fee of $19. While the program is slated to launch at some point in April, Legion Health is currently managing a waitlist for prospective users. This pilot represents a significant leap, pushing the boundaries of AI integration into the sensitive domain of mental health pharmacology.

    A Deliberately Narrow Scope for High-Stakes Automation

    Crucially, the program’s design is deliberately constrained, a measure taken to mitigate the inherent risks of delegating such authority to AI. Both the range of medications covered and the eligibility criteria for patients are meticulously defined. According to the agreement between Legion Health and Utah’s Office of Artificial Intelligence Policy, the chatbot is limited to renewing only 15 types of lower-risk maintenance medications. These are drugs that have already been initially prescribed by a qualified human clinician, ensuring a baseline of prior medical evaluation.

    Among the medications approved for AI-driven renewal are widely recognized antidepressants and anxiolytics such as fluoxetine (Prozac), sertraline (Zoloft), bupropion (Wellbutrin), mirtazapine, and hydroxyzine, commonly prescribed for anxiety and depression. Patient eligibility is equally stringent: individuals must be deemed clinically stable. This means anyone who has experienced a recent change in medication dosage, a switch to a different medication, or a psychiatric hospitalization within the past year is explicitly excluded from the pilot. Furthermore, a crucial human oversight mechanism is built in: patients are required to check in with a human healthcare provider after every ten refills or six months, whichever benchmark is reached first.
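Taken together, these rules amount to a small decision table. The following is a minimal Python sketch of how such an eligibility gate might look; the field names are illustrative assumptions, not Legion Health's actual implementation, and the allowlist below contains only the five publicly named drugs out of the roughly 15 covered.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative subset: the pilot covers ~15 lower-risk maintenance drugs,
# of which these five are publicly named. The full allowlist is assumed.
ALLOWED_MEDICATIONS = {
    "fluoxetine", "sertraline", "bupropion", "mirtazapine", "hydroxyzine",
}

@dataclass
class Patient:
    medication: str                    # existing prescription, lowercase generic name
    last_dose_change: date | None      # most recent dosage change or medication switch
    last_hospitalization: date | None  # most recent psychiatric hospitalization
    refills_since_human_visit: int
    last_human_visit: date

def is_eligible_for_ai_renewal(p: Patient, today: date) -> bool:
    """Apply the pilot's stability gates; any failure routes to a human clinician."""
    one_year_ago = today - timedelta(days=365)
    six_months_ago = today - timedelta(days=182)  # ~6 months

    if p.medication not in ALLOWED_MEDICATIONS:
        return False  # new prescriptions and excluded drug classes stay human-managed
    if p.last_dose_change and p.last_dose_change > one_year_ago:
        return False  # recent dosage change or switch to a different medication
    if p.last_hospitalization and p.last_hospitalization > one_year_ago:
        return False  # psychiatric hospitalization within the past year
    if p.refills_since_human_visit >= 10 or p.last_human_visit < six_months_ago:
        return False  # mandatory human check-in: every 10 refills or 6 months
    return True
```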

    Exclusions: Where Human Oversight Remains Paramount

    The system’s limitations are as telling as its capabilities. The AI chatbot is expressly prohibited from issuing new prescriptions, underscoring the state’s cautious approach to initial diagnosis and treatment planning, which remains firmly in the hands of human professionals. It is also barred from managing medications that necessitate close clinical supervision, such as drugs requiring regular blood-test monitoring to ensure safety and efficacy.

    Perhaps most significantly, controlled substances are entirely excluded from the pilot’s scope. This immediately rules out a broad array of medications used to treat conditions like ADHD, which often fall under this classification due to their potential for abuse or diversion. The exclusion extends further to benzodiazepines, commonly used for acute anxiety; antipsychotics, vital for conditions like schizophrenia and bipolar disorder; and lithium, widely regarded as the gold-standard treatment for bipolar disorder. These deliberate exclusions highlight the pilot’s conservative nature, effectively sidelining many of the more complex and higher-risk psychiatric cases, where nuanced human judgment is considered indispensable.

    The Patient’s Journey: Interacting with the AI System

For eligible patients wishing to use Legion Health’s service, a structured process is in place. First, patients must actively opt in to the program. They are then required to verify their identity and provide proof of an existing, valid prescription, typically by submitting a photograph of the medication label or pill bottle. Following these preliminary steps, the AI chatbot engages patients with a series of questions. These inquiries delve into their current symptoms, any experienced side effects, and the perceived efficacy of their medication.

    Crucially, the system also incorporates critical safety questions designed to flag potential risks. Patients are asked about suicidal thoughts, instances of self-harm, severe adverse reactions, and pregnancy status. If any of a patient’s responses deviate from the pilot’s carefully defined low-risk criteria, the system is programmed to immediately escalate the case to a human clinician for review before any refill can be issued. This human intervention acts as a vital safety net. Additionally, both patients and pharmacists retain the ability to request a human review at any point, providing another layer of oversight.
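The escalation logic described above is essentially a conservative triage filter: any positive safety answer short-circuits the automated path. Below is a sketch under the assumption that screening responses are collected as simple booleans; the question keys and function names are hypothetical, not Legion Health's actual code.

```python
# Safety screen: any flagged answer escalates the request to a human
# clinician before a refill can be issued. Question keys are hypothetical.
SAFETY_QUESTIONS = (
    "suicidal_thoughts",
    "self_harm",
    "severe_adverse_reaction",
    "pregnant_or_possibly_pregnant",
)

def route_refill_request(responses: dict[str, bool],
                         human_review_requested: bool) -> str:
    """Return 'escalate' or 'auto_renew' for a screened refill request."""
    # Patients and pharmacists can force a human review at any point.
    if human_review_requested:
        return "escalate"
    # Any deviation from the low-risk criteria goes to a human clinician.
    # Missing answers are treated as flags, so the filter fails safe.
    if any(responses.get(q, True) for q in SAFETY_QUESTIONS):
        return "escalate"
    return "auto_renew"

# Example: a clean screen auto-renews; a single flagged answer escalates.
clean = {q: False for q in SAFETY_QUESTIONS}
assert route_refill_request(clean, human_review_requested=False) == "auto_renew"
assert route_refill_request({**clean, "self_harm": True}, False) == "escalate"
```

The fail-safe default on missing answers is a design choice in this sketch: in a system where a wrong automated refill is costlier than an unnecessary human review, ambiguity should always fall toward escalation.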

    The Promise of Innovation: State and Company Perspectives

    State officials in Utah have articulated clear aspirations for the pilot program. Upon its announcement, they stated, “By safely automating the renewal process for maintenance medications, we are allowing patients to get the care they need much more quickly and affordably.” They envision that, over time, this automation will liberate human healthcare providers, enabling them to “focus their time on more complex, higher-risk patient needs.” This reallocation of resources is seen as a strategic move to address the severe mental health care shortages plaguing Utah, where an estimated 500,000 residents currently lack adequate access to mental health services.

    Legion cofounder and CEO Yash Patel has amplified these ambitions, describing the program in even more expansive terms. He has publicly characterized it as a “global first,” predicting that it will dramatically expand access to healthcare and mark “the beginning of something much bigger than refills.” This rhetoric suggests a vision far beyond mere prescription renewals, hinting at a transformative role for AI in the broader healthcare landscape.

    Professional Skepticism: Unpacking the Concerns of Psychiatrists

Despite the optimistic pronouncements from state officials and Legion Health, the psychiatric community remains largely unconvinced. Dr. Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, expressed his reservations to The Verge, suggesting that the “advantages of an AI-based refill system may be overstated.” He suspects the tool “will not increase access for those who are most in need of care.” Dr. Kious highlights a critical point: the target patients for this service are those already on a stable treatment plan with an existing psychiatrist, implying that they already have some level of access to care.

    The Risk of Over-treatment and Lack of Active Management

    Dr. Kious further cautions that such automation could inadvertently contribute to what he terms an “epidemic of over-treatment” in psychiatry. He argues that by making refills exceptionally easy, some patients might remain on medication longer than clinically necessary. This concern is echoed by Dr. John Torous, Director of Digital Psychiatry at Beth Israel Deaconess Medical Center and a professor of psychiatry at Harvard Medical School. Dr. Torous notes that while some individuals indeed benefit from long-term psychiatric medication, others may benefit more from carefully managed reductions or discontinuation of their prescriptions. He stresses that these decisions “require more active management, changes, and careful consideration,” a level of nuanced oversight that becomes significantly harder to achieve when routine check-ins are outsourced to a chatbot.

    Safety, Transparency, and the Opaque Nature of AI

    A more profound concern among mental health professionals revolves around the fundamental safety of automating even seemingly routine aspects of psychiatric care. Dr. Torous questions whether any AI system currently available can truly “understand the unique context and factors that go into a person’s medication plan,” emphasizing that prescribing extends far beyond simple drug interaction checks. Dr. Kious concurs, stating that while such a system “could be safe in principle, it all depends on the details.”

    These concerns are magnified by the nascent stage of these AI systems and their notorious lack of transparency to external scrutiny. Dr. Kious starkly describes the current situation as feeling “a bit like alchemy right now.” He advocates for “greater transparency, more science, and more rigorous testing before people are asked to use this,” underscoring the critical need for robust validation before widespread adoption.

    Screening Pitfalls and the Nuances of Human Interaction

    Beyond the broad conceptual concerns, immediate safety issues are also raised. Dr. Kious points out the potential for a chatbot to miss crucial information during the screening process. The AI might fail to ask the right questions, a patient might not accurately recognize or report a side effect, or they might simply provide inaccurate answers. Some patients, eager to expedite their care, might even intentionally tell the system “what it wants to hear.”

    While acknowledging that psychiatry often relies on patient self-report, Dr. Kious emphasizes that human clinicians typically access a broader range of information. When interacting with patients, he pays attention not only to verbal communication but also to non-verbal cues, what is left unsaid, and the overall presentation of the patient. Although patients can also mislead human providers, Dr. Kious suggests that a chatbot system might inadvertently make it easier for patients to manipulate their answers to achieve a desired outcome, potentially compromising their safety.

    Lessons from Precedent: The Doctronic Incident

    Dr. Torous also highlights more overt safety risks, drawing parallels to incidents already observed with other chatbots in real-world applications. Legion Health’s pilot is not Utah’s first foray into AI prescribing; it follows an earlier, broader pilot focused on primary care with a company called Doctronic, which launched last December. Disturbingly, within weeks of Doctronic’s system going live, security researchers managed to exploit its vulnerabilities. They successfully prompted the system to propagate vaccine conspiracy theories, generate instructions for manufacturing methamphetamine, and even recommend tripling a patient’s opioid dosage. While state officials emphasize that the Legion program is more focused and specifically designed to address Utah’s mental health shortage, the Doctronic incident serves as a stark reminder of the potential for unintended and dangerous outcomes when AI is deployed in sensitive medical contexts.

    Legion’s Safeguards and Future Ambitions

    In response to these concerns, Legion Health maintains that its pilot operates under stringent guardrails. The company points to its “conservative eligibility gates” as a primary line of defense. Furthermore, its agreement with the state of Utah mandates detailed monthly reports and requires close human physician review for the first 1,250 prescription requests. Following this initial phase, a periodic sampling of approximately 5 to 10 percent of subsequent requests will also be reviewed by human clinicians.
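The oversight schedule in the agreement is mechanically simple: every one of the first 1,250 requests receives a human physician review, after which roughly 5 to 10 percent of requests are sampled. A minimal sketch of that selection rule follows; the 7.5 percent rate is our assumed midpoint, since the agreement specifies only the 5-10 percent range.

```python
import random

FULL_REVIEW_THRESHOLD = 1250   # every request up to this count is human-reviewed
SAMPLE_RATE = 0.075            # assumed midpoint of the stated 5-10% range

def needs_human_review(request_index: int, rng: random.Random) -> bool:
    """Decide whether a refill request is pulled for human physician review.

    request_index is the 1-based running count of requests in the pilot.
    """
    if request_index <= FULL_REVIEW_THRESHOLD:
        return True                      # initial phase: review everything
    return rng.random() < SAMPLE_RATE    # afterwards: periodic random sampling

# Example: request 900 is always reviewed; request 5,000 is sampled ~7.5% of the time.
rng = random.Random(42)
assert needs_human_review(900, rng)
```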

    Arthur MacWaters, Legion’s cofounder and president, emphasized to The Verge that “risks exist in any remote care model, whether AI-assisted or fully human-led.” He stressed that the company’s “workflow does not rely on a single self-reported answer to unlock treatment.” MacWaters highlighted key safeguards, including the pilot’s narrow limits on medications and patient eligibility, built-in AI safety screens, the involvement of pharmacists, and the crucial ability to escalate cases to a human clinician when necessary. He articulated the company’s perspective: “We see this as critical to expand access to hundreds of thousands of people in Utah who live in mental health shortage areas, as well as an important proving ground for AI in medicine.”

    Despite the current focus on Utah, Legion Health harbors broader ambitions. While MacWaters refrained from commenting on specific timelines or expansions to other states, he expressed excitement for “what the future holds.” Publicly, both MacWaters and the company have signaled significant expansion plans, with Legion’s refill site stating the service will be available “nationwide 2026,” and MacWaters having previously suggested it “will be in every state very very quickly.”

    The Fundamental Question: What Problem is Being Solved?

    For the psychiatrists interviewed, the entire initiative seems to circle back to a rather fundamental question: What problem is Legion Health truly solving? Dr. Kious points out that for established patients on stable medication regimens, obtaining a refill often doesn’t even necessitate an appointment. He elaborates that most psychiatrists are typically “happy to refill prescriptions for free and without an appointment” unless there are specific concerns about the patient’s well-being or if the medication itself carries a significant inherent risk. Paradoxically, these very scenarios—complex cases, new prescriptions, or medications with higher risks—are precisely what Legion’s AI system is explicitly barred from handling. This raises doubts about whether the AI truly addresses the most pressing access gaps or merely automates a process that is already relatively straightforward for a subset of patients.

    Ultimately, the consensus among the skeptical medical professionals is one of caution. Dr. Torous advises, “I would personally avoid it for now,” suggesting that if a patient has found an effective treatment plan with their current clinician, it is likely best to maintain that existing relationship.

    Conclusion

The Utah pilot program allowing AI chatbots to renew psychiatric prescriptions represents a significant frontier in the integration of artificial intelligence into healthcare. Driven by the promise of reducing costs and alleviating severe mental health care shortages, this initiative, spearheaded by Legion Health, seeks to automate the low-risk maintenance medication refill process.

However, this bold step is met with considerable apprehension from the psychiatric community. Experts like Dr. Brent Kious and Dr. John Torous raise critical concerns regarding the system’s transparency, the potential for over-treatment, its inability to provide nuanced patient management, and the inherent safety risks associated with AI in a field that relies heavily on human judgment, empathy, and the interpretation of subtle cues.

While Legion Health has implemented safeguards and the program is narrowly scoped, the precedent of earlier AI pilot failures in Utah, coupled with the fundamental question of whether the AI truly addresses the core access issues for those most in need, casts a shadow of caution over its ambitious expansion plans. The tension between technological innovation and patient safety, transparency, and effective clinical practice will undoubtedly define the future trajectory of AI in mental healthcare. For now, the medical community urges prudence and rigorous evaluation before embracing such a transformative shift.

