I received the spring update newsletter from the WA Medical Commission, openly advertising its support of the “Brave New World” of Artificial Intelligence in Healthcare.
So what does Brave New World by Aldous Huxley describe?
Brave New World is set in 2540 CE, which the novel identifies as the year AF 632. AF stands for “after Ford,” as Henry Ford’s assembly line is revered as god-like; this era began when Ford introduced his Model T. The novel examines a futuristic society, called the World State, that revolves around science and efficiency. In this society, emotions and individuality are conditioned out of children at a young age, and there are no lasting relationships because “every one belongs to every one else” (a common World State dictum). Huxley begins the novel by thoroughly explaining the scientific and compartmentalized nature of this society, beginning at the Central London Hatchery and Conditioning Centre, where children are created outside the womb and cloned in order to increase the population. The reader is then introduced to the class system of this world, where citizens are sorted as embryos to be of a certain class. The embryos, which exist within tubes and incubators, are provided with differing amounts of chemicals and hormones in order to condition them into predetermined classes. Embryos destined for the higher classes get chemicals to perfect them both physically and mentally, whereas those of the lower classes are altered to be imperfect in those respects. These classes, in order from highest to lowest, are Alpha, Beta, Gamma, Delta, and Epsilon. The Alphas are bred to be leaders, and the Epsilons are bred to be menial labourers.
So first of all, these cowards can shove their “Brave New World” ideology. IT IS ANTI-HUMAN and literally satanic. The degradation of everything it is to be human should not be advertised by a medical board! You are openly saying you want to brainwash children from birth and create a slave caste system? This lady should resign!
These people at the medical boards still need to be prosecuted for genocide, for violating the Nuremberg Code, and for murdering and maiming hundreds of thousands of Americans while pushing the Covid-19 weapons of mass destruction on the population. Do what the New Zealand whistleblower did: trace how many people were murdered by the shots, provider by provider, and how many were debilitated. Prosecute accordingly, if we ever get rid of the corrupt judicial system that has no interest in justice and is owned by the deep state.
Furthermore, let’s look at the population’s live blood and show the undisclosed self-assembling nanotechnology that has turned people into cyborgs without their consent.
This is what these medical boards stand for: Transhumanism, control through the healthcare system, the extermination of the human species and free will, and the fusion of humanity with AI. How does AI create the perfect health profile for patients? Via intrabody area network sensors and nanotechnology, the very thing we see in the live blood. It’s called Precision Medicine, advertised of course as a great anti-cancer possibility. Nonsense; that is the storefront you are supposed to buy into.
Artificial Intelligence and Nanotechnology are two fields that have been instrumental in realizing the goal of Precision Medicine – tailoring the best treatment for each cancer patient. Recent convergence between these two fields is enabling better patient data acquisition and improved design of nanomaterials for precision cancer medicine.
Nanomaterials have contributed to the evolution of precision medicine throughout all of the medical stages. New omics collection technologies, such as single-molecule nanopore sequencing, enable fast and sensitive single-molecule detection along with longer sequence read length, thus maintaining genetic context.[4] Diagnostic assays based on nanosensors allow biomarker detection in femtomolar concentrations as well as scanning for multiple disease biomarkers simultaneously in liquid biopsies (blood, urine, saliva) and in cell cultures.
The WA Board Chair describes an event that was hosted by the notorious Federation of State Medical Boards, an unelected, non-governmental body that was instrumental in threatening physicians that disinformation about the C19 bioweapon would result in the suspension of their licenses.
The FSMB should have been prosecuted for its illegal coercion and threats against clinicians, and for the mass genocide that resulted, empowered by sheepish clinicians complying instead of fighting for patients’ rights and their Hippocratic oath. This compliance resulted in the C19 bioweapon holocaust that will see its full mass-depopulation effect this year and next. Now this New World Order organization is peddling the Transhumanist Agenda, as medical boards gladly comply, at least in my state.
The doctors have not yet figured out that not only are they part of the most diabolical satanic agenda ever to threaten mankind, but also that their own jobs are planned to be phased out eventually, so that the AI robots get to take care of the Cyborgs.
As you can see, indoctrination in AI literacy will already start with medical students. I bet adoption of AI recommendations will be linked to pay-for-performance metrics, meaning part of their pay will depend on this. This is why I proudly wear my T-shirt: “My Soul is not for sale.”
The Federation of State Medical Boards’ (FSMB) Symposium on Artificial Intelligence in Health Care and Medical Regulation was held on Wednesday, January 17, 2024, at the Hamilton Hotel, Washington, DC.
The meeting was attended by 133 individuals, including members and staff of state and territorial medical and osteopathic boards, representatives of the health technology sector, the legal profession, venture capital, government and several partner organizations.
Opening Keynote Speaker
The opening keynote speaker was Jeffery Smith, MPP, Deputy Division Director, Certification and Testing Division in the Office of Technology at the Office of the National Coordinator (ONC) for Health Information Technology in the U.S. Department of Health and Human Services. He shared a brief primer about the ONC and its statutory role in the federal government, noting that the Government Accountability Office (GAO) had looked at clinical applications and administrative applications of AI as early as 2020. Mr. Smith’s presentation highlighted the potential benefits and challenges of AI in healthcare, including the possibility of widespread harm by misuse or misapplication of AI.
Mr. Smith spoke at length about the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule, which will become effective on January 1, 2025, noting that the ONC’s focus on transparency has been characterized as a fundamental first step towards governance of AI in healthcare. The rule defines Predictive DSI (Decision Support Intervention) as “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis.” He also discussed the alignment of this rule with President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued October 30, 2023.
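To make that definition concrete, here is a minimal sketch (my own illustration, not taken from the rule or the symposium) of the kind of software the Predictive DSI definition covers: a toy model that derives relationships from training data and then produces an output resulting in a recommendation. All feature names, training values and thresholds here are hypothetical.

```python
# Illustrative only: a toy "Predictive DSI" in the HTI-1 sense. The model
# derives relationships from training data, then produces an output that
# results in a recommendation. Features and labels are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [age, systolic_bp]; label 1 = flag for follow-up.
X_train = [[45, 130], [60, 155], [35, 118], [70, 165], [50, 140], [30, 110]]
y_train = [0, 1, 0, 1, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

def recommend(age: float, systolic_bp: float) -> str:
    """Emit a recommendation derived from the trained model."""
    flagged = model.predict([[age, systolic_bp]])[0]
    return "recommend follow-up" if flagged else "no action suggested"

print(recommend(65, 150))  # e.g. "recommend follow-up"
```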
Mr. Smith concluded by addressing what medical regulators, educators and accreditors need to be focused on: basic AI education at the medical student level, with an overarching aim of ensuring basic AI literacy for all practitioners.
Perspectives Panel
The first panel featured a discussion with Marc Paradis, MS, Vice President of Data Strategy at Northwell Health, Marc Succi, MD, a radiologist at Massachusetts General Hospital-Harvard Medical School and Associate Chair of Innovation and Commercialization at Mass General Brigham, and Alya Sulaiman, JD, a Partner at McDermott Will & Emery. Frank Meyers, JD, FSMB’s Deputy Legal Counsel, served as moderator.
Panelists outlined current AI usage for ambient clinical documentation, diagnostic assistance, patient communication and care coordination. Challenges include physician reluctance to adopt recommendations, difficulty in determining accountability, and the risk of introducing bias. The panelists highlighted both risks and benefits associated with greater incorporation of AI into the clinical setting, such as acceleration of diagnoses and individualized treatment models.
Collectively, the panelists recognized the importance of regulatory approaches focused on addressing those harms we are most interested in avoiding, rather than adopting vague principles. Multiple panelists noted that the pace of AI advancement necessitates rapid adoption while addressing ethical concerns. Action items from this panel center on further testing of current AI tools in clinical settings and developing education accompanied by incentives to drive responsible clinician usage.
The panel also addressed the challenges of attributing liability for AI usage across developers, organizations, and clinicians, and discussed potential exploration of shared responsibility models. Panelists noted that distributive responsibility models of regulation tend to be less favorable than those focusing on shared responsibility and that such regulatory models would require clear measures on intended functionality, performance degradation risks and appropriate human oversight.
Panelists also observed that generative AI could be a solution for balancing workforce concerns and burnout by enhancing our ability to assess clinical risk and improve care. The panel also discussed that the transformative impact of AI on healthcare will take longer than 5-10 years and expressed hope that there will be an effort to look not only at AI that improves the quality of health care but also at AI that improves access to care, as in rural health care.
The panel concluded by commenting on a newly introduced piece of state legislation that would require a licensed physician to oversee and review all uses of AI in healthcare.
Key Ethical Challenges for Medical Regulation
Jeremy Petch, PhD, Director of Digital Health Innovation at Hamilton Health Sciences and an Assistant Professor at the University of Toronto and McMaster University, reviewed what is meant by “Black Box” AI models, noting that a “black box” is an engineering term that refers to algorithms that are sufficiently complex that they are not easily interpretable by humans (and sometimes not interpretable at all). He noted that black boxes are often cited as a barrier to the adoption of AI in medicine, given that they impact clinician ability to trust the models.
Dr. Petch reviewed the limitations of explainability: explanations are approximations, so they may produce only an imitative understanding of how a black-box AI actually functions. Interpretability, on the other hand, implies that a human can understand exactly how a model arrived at a specific output. He proposed a guideline for deciding when to require interpretable models rather than merely explainable ones, suggesting that the decision should be based on the stakes of the clinical decision and on how large the tradeoff between interpretability and performance is. If there is no meaningful difference in performance or accuracy between an interpretable and an explainable model, the interpretable model should be used. However, if the explainable model offers a significant improvement in performance, its use may be justified. As the stakes of a decision rise, the improvement in performance must also rise significantly for continued acceptance of an explainable black-box model to be justified. If no significant improvement in performance is offered, then interpretability should be required.
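Dr. Petch’s guideline can be written down as a simple decision rule. The sketch below is my own illustration, assuming hypothetical numeric thresholds (the presentation specified no numbers); the parameter names `stakes` and `perf_gain_black_box` are invented for this example.

```python
# A minimal sketch of the interpretability-vs-explainability guideline above.
# All thresholds are hypothetical; the presentation specified no numbers.

def required_model(stakes: float, perf_gain_black_box: float) -> str:
    """Decide whether an interpretable model should be required.

    stakes: clinical stakes of the decision, scaled 0 (trivial) to 1 (life-or-death).
    perf_gain_black_box: performance gain of the explainable black-box model
        over the best interpretable alternative (e.g., a delta in AUROC).
    """
    # No meaningful performance difference: always use the interpretable model.
    if perf_gain_black_box <= 0.01:  # illustrative cutoff for "meaningful"
        return "interpretable model required"
    # As stakes rise, the gain needed to justify a black box rises with them.
    required_gain = 0.02 + 0.10 * stakes  # hypothetical scaling
    if perf_gain_black_box >= required_gain:
        return "explainable black-box model may be justified"
    return "interpretable model required"

# A high-stakes decision with only a modest gain from the black box:
print(required_model(stakes=0.9, perf_gain_black_box=0.05))
# -> interpretable model required
```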
Sara Gerke, JD, an Assistant Professor of Law at Penn State Dickinson Law School, discussed the potential liability for physicians using AI. Ms. Gerke analyzed scenarios in which physicians face liability risks by departing from the current standard of care, even if the AI recommendations they follow are correct. She also discussed a recent study in the Journal of Nuclear Medicine, which concluded that a juror may be no more inclined to assign liability when a clinician rejects AI-generated advice and harm results than when a clinician follows a non-standard-of-care approach that causes patient harm. She further discussed that, beyond physicians, the law also creates liability risks for the hospital systems purchasing and implementing AI tools and for the developers involved in creating them.
Both panelists agreed that as AI becomes more widely adopted and more complex, the standard of care itself may shift to incorporate AI recommendations. Legal frameworks could also change through case law or through legislation such as EU directives regulating AI as a product. Analysis of the European Union’s AI Act, provisionally agreed in late 2023, illustrates the following points: (1) broad directive gaps exist for healthcare AI; (2) legal causation issues are present; (3) there are unique software product challenges; and (4) evidentiary rule changes on algorithmic opacity are needed.
The panelists also commented that AI learning creates opacity, posing trust issues for physicians and raising informed-consent questions for patients, and noted that full-lifecycle improvements to training-data controls, explainability and clinical trials could help address these AI challenges.
FSMB Ethics and Professionalism Committee
FSMB Board Member Mark Woodland, MS, MD, who chairs FSMB’s Ethics and Professionalism Committee, provided insight into the Committee’s deliberations on the issue of ethical use of artificial intelligence. He noted that the committee discussion reinforced the importance of key traditional ethical principles, such as beneficence, nonmaleficence, autonomy and justice. He noted that these principles will manifest in a policy to help state medical boards and physicians navigate the responsible and ethical incorporation of AI and stressed a need for (1) greater incorporation of AI knowledge in medical education, (2) increased emphasis on human accountability, (3) improved policies on informed consent and data privacy, (4) recommendations to proactively address responsibility and liability concerns, and (5) collaboration with experts. Dr. Woodland concluded by stating that by thoughtfully addressing the opportunities and challenges posed by AI in healthcare, state medical boards can promote the safe, effective, and ethical use of AI as a tool to enhance, but not replace, human judgment and accountability.
Small Group Breakout Sessions
Attendees broke out into small groups to discuss the following topics and issues related to AI adoption and use by licensed health care professionals: (1) What strategies should state medical boards use to keep pace with the rapid advancements in AI technology and its application in medical practice? (2) What steps can state medical boards take to ensure that AI tools trained on biased algorithms cannot be used by licensees? (3) Which use of AI tools in health care can result in patient harm and what are appropriate regulatory responses?
Among the observations reported at the end of the breakout sessions was that it is essential that the FSMB continue to track developments in AI and raise awareness among licensees and members of the public about its potential use. Discussions noted that in the near future state medical boards are going to see complaints about the misuse of AI and potential harms caused because a licensee either followed or did not follow the advice of an AI tool or algorithm. Addressing expert opinions in such cases was identified as an area where medical boards may need improved education and guidance. Attendees suggested that the FSMB could develop educational modules to keep licensees and member boards aware of AI from their perspective. Attendees also identified that requirements for AI-focused CME could be a means of helping licensees keep pace with AI.
Panel: Perspectives on AI in Healthcare and Reflections on the Day
The final panel, moderated by Eric Eskioglu, MD, MBA, Former Executive Vice President, Chief Medical and Scientific Officer at Novant Health, discussed generative AI and its role in health care and medical regulation. Panelists included Sarvam TerKonda, MD, Past Chair of the FSMB and Associate Professor of Plastic Surgery at the Mayo Clinic College of Medicine and Science, Jade Dominique James-Halbert, MD, MPH, a specialist in Obstetrics-Gynecology in Bridgeton, Missouri, who is Chair of SSM Health DePaul Hospital in St. Louis, MO, Alexis Gilroy, JD, partner at Jones Day, and Shannon Curtis, JD, Assistant Director of Federal Affairs for the American Medical Association.
Discussion began by noting that the evolution of AI has been rapid and that many medical professionals and medical boards are deficient in their knowledge of AI and how it is impacting the future of healthcare. Panelists emphasized that AI is already being used every day in our lives and will play different roles for different specialists and specialties.
The panelists debated whether it is best to regulate the technology itself or the use cases in which AI plays a role in the practice of medicine. As a corollary, panelists shared their perspectives on whether the current regulatory framework is sufficient to address AI. The panel discussed the notion that the existing legal standard of care cited in existing regulations may suffice, with parallels to how a medical board handles liability, which was highlighted as a major concern for physicians in utilizing AI.
Where is this leading? Well, in the name of progress, it is leading to the Transhumanist Metaverse, the eradication of free will and the creation of digital twins.
The metaverse integrates physical and virtual realities, enabling humans and their avatars to interact in an environment supported by technologies such as high-speed internet, virtual reality, augmented reality, mixed and extended reality, blockchain, digital twins and artificial intelligence (AI), all enriched by effectively unlimited data. The metaverse recently emerged as social media and entertainment platforms, but extension to healthcare could have a profound impact on clinical practice and human health. As a group of academic, industrial, clinical and regulatory researchers, we identify unique opportunities for metaverse approaches in the healthcare domain. A metaverse of ‘medical technology and AI’ (MeTAI) can facilitate the development, prototyping, evaluation, regulation, translation and refinement of AI-based medical practice, especially medical imaging-guided diagnosis and therapy. Here, we present metaverse use cases, including virtual comparative scanning, raw data sharing, augmented regulatory science and metaversed medical intervention. We discuss relevant issues on the ecosystem of the MeTAI metaverse including privacy, security and disparity. We also identify specific action items for coordinated efforts to build the MeTAI metaverse for improved healthcare quality, accessibility, cost-effectiveness and patient satisfaction.
Of course, AI can be used for dual-use military purposes too. What is there to be concerned about?
Summary:
If these doctors still have a soul, they should be up in arms, not embracing this. The healthcare system has been weaponized and militarized already, and AI having access to all medical records will only make people a target. You cannot control AI - that is already known:
I am not interested in their Brave New World. Please remember, the military is using the healthcare system to get the public used to the concept of Cyborgs - the cost is the extinction of the divinely created human species.
Cyborg Soldier 2050: Human/Machine Fusion and the Implications for the Future of the DOD
The global healthcare market will fuel human/machine enhancement technologies primarily to augment the loss of functionality from injury or disease, and defense applications will likely not drive the market in its later stages. The BHPC study group anticipated that the gradual introduction of beneficial restorative cyborg technologies will, to an extent, acclimatize the population to their use.
To fully grasp how inexplicably prescient and prophetic Huxley was, search “Aldous Huxley / Mike Wallace Interview 1958.” Make sure you’re sitting in a chair deep enough not to fall out of. Huxley not only predicted The Great Reset/Agenda 2030, but also the chain of events (created events) that would expedite ushering it in.
Oh yeah, Klaus Schwab didn’t coin the phrase “You’ll own nothing and be happy”; Huxley did, in 1931. The horrible character Mustapha Mond boasted to the Savage, “You will own nothing, have no privacy and love your servitude.”
The UN/WEF/WHO global elite megalomaniacal monsters are literally using Brave New World, 1984, Das Kapital, Mein Kampf, The Communist Manifesto, Chairman Mao, Machiavelli, A Clockwork Orange, Soylent Green, I, Robot and The Koran as a concomitant playbook.
Build Back Better aka The Great Reset aka Brave New World aka Tyrannical Transhuman Technocracy
And nary a peep from the Democrats or the Republicans
This next level of fraud, sickness, and murder is hiding behind the shield of AI. I made a podcast on the AI Deception and how they would use it to hide behind. They can just fake the AI, and if anyone tries to hold someone accountable, well, you cannot put a computer program in prison, can you? They are perfecting their crimes before our eyes.