FDA’s Rushed AI Rollout Fails to Deliver Promised Results


The US Food and Drug Administration (FDA) has unveiled an agency-wide artificial intelligence (AI) tool called Elsa, intended to help FDA employees with tasks such as clinical protocol reviews, scientific evaluations, and identifying inspection targets. The rollout, however, has been met with concerns over the tool’s rushed development and inaccurate output.

Background on the Development of Elsa

Elsa is a large language model (LLM) based on Anthropic’s Claude and was developed by the consulting firm Deloitte. Since 2020, Deloitte has received $13.8 million to develop the original database of FDA documents from which Elsa’s training data is derived. In April, the firm was awarded a $14.7 million contract to scale the technology across the agency.

Rushed Development and Potential Inaccuracies

Elsa’s development has been criticized as rushed, with some arguing the tool should have been tested far more thoroughly before release. According to NBC News, FDA staff tested Elsa on Monday with questions about FDA-approved products and other public information, only to find that it produced summaries that were either completely or partially wrong. The findings have raised concerns about the accuracy of a tool intended to support critical decision-making.

Staff Concerns Over Rushed Rollout

FDA staffers have voiced concerns over the rushed rollout, with some calling Elsa "overhyped" and "inaccurate." They argue the tool should be limited to administrative tasks, not scientific ones. Referring to FDA Commissioner Marty Makary and the Department of Government Efficiency, one staffer said, "Makary and DOGE think AI can replace staff and cut review times, but it decidedly cannot."

Lack of Guardrails for Tool’s Use

The FDA has also been criticized for failing to establish guardrails for how Elsa should be used. Staffers worry the agency is pushing ahead with the rollout without thinking through the consequences. As one put it, "I’m not sure in their rush to get it out that anyone is thinking through policy and use."

Previous AI Pilots and Their Fate

Before Elsa’s rollout, each center within the FDA was developing its own AI pilot. Amid cost-cutting in May, however, the pilot built by the FDA’s Center for Drug Evaluation and Research (CDER), called CDER-GPT, was selected to be scaled up into an agency-wide version and rebranded as Elsa.

Center for Devices and Radiological Health’s AI Pilot

The Center for Devices and Radiological Health’s (CDRH) pilot, CDRH-GPT, is reportedly buggy, with problems uploading documents and letting users submit questions. That track record raises further doubts about the reliability of the agency’s AI tools in critical review work.

Conclusion

The rollout of Elsa has sparked concerns over rushed development and potential inaccuracies. Staffers worry about the lack of guardrails for the tool’s use and the consequences of pushing ahead without proper planning. As the FDA expands the use of AI in its decision-making processes, it is essential that the agency prioritize accuracy, reliability, and careful planning to avoid mistakes that could compromise public health.
