If The US Navy Warns Its Service Members Against The Use Of AI Chatbots, Should Civilians Take Note? AI Apps Harvesting Biometric Data, Altman's World ID - Technocracy Taking Over Our Civilization
The US Navy warns personnel against using generative AI apps such as DeepSeek
The recent news that the US Navy is warning its personnel about the security threat of AI chatbots is something we should take note of as a society. The adoption of AI tools has become ubiquitous, yet few people are aware of the risks.
Biometric data harvesting by AI also carries the threat of biometric data hacking. If AI is monitoring your movements, your processing speed, your cognitive abilities, and your emotional state, you can understand that hacking any of these areas is quite easy, a risk that has been well described by AI expert Cyrus Parsa.
Due to "potential security and ethical concerns associated with the model's origin and usage"
In brief: DeepSeek's AI chatbot, which has sent the tech industry into a chaotic scramble, is officially banned by the US Navy, which told personnel to avoid using the app due to "potential security and ethical concerns associated with the model's origin and usage." It's important to note, though, that other apps, including ChatGPT, are also banned.
Chinese startup DeepSeek released its open-source R1 AI model this month, a 671-billion-parameter model trained using just 2,048 Nvidia H800s and $5.6 million - a fraction of the resources required by tech giants who spend billions on their AI systems. The prospect of companies no longer having to buy Nvidia's expensive flagship GPUs sent the company's market value down by a record $600 billion and wiped even more off the broader stock market.
DeepSeek, owned and operated by a hedge fund in Hangzhou, China, is currently topping the Apple App Store free app charts, but the US Navy won't be using it. A spokesperson for the military branch confirmed that the AI was to be avoided by members, who were told to refrain from downloading, installing, or using the DeepSeek model "in any capacity." The ban covers both work-related and personal use.
The move is said to be part of the Department of the Navy's Chief Information Officer's generative AI policy. CNBC reports that the instructions came from an email that states, "We would like to bring to your attention a critical update regarding a new AI model called DeepSeek." The warning was based on an advisory from the Naval Air Warfare Center Division's Cyber Workforce Manager.
Given that AI is in charge of monitoring the WBAN data accumulated via self-assembly nanotechnology, and that AI is in control of nano- and microrobots, we should be careful about AI's capacity to surveil human biometric data. See, for example, recent IEEE publications on data control via AI:
Fueled by healthcare electronics (HE) and artificial intelligence (AI), the smart healthcare market has been emerging, including the medical wireless body area network (WBAN), which can be used for daily health and early disease monitoring, where intra-WBAN communication is at the first tier. However, timely message sending and low power consumption are still two essential issues. Therefore, an AI-generated-content-based control strategy called KsCS-DQNs is suggested to address these challenges.
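Stripped of the acronyms, the control problem in that abstract is a trade-off between timely transmission and battery life. The sketch below illustrates that trade-off with minimal tabular Q-learning; the states, actions, rewards, and numbers are simplified assumptions made for illustration, not the paper's actual KsCS-DQNs design.

```python
import random

# Illustrative sketch: a body-sensor node learns when to transmit a reading
# ("send") versus conserve power ("defer"). States and rewards are invented
# assumptions, not the KsCS-DQNs scheme from the cited paper.

ACTIONS = ("send", "defer")

def reward(urgency, battery, action):
    """Reward timely sends of urgent data; penalize wasting scarce power."""
    if action == "send":
        cost = 1.0 if battery else 2.5   # transmitting on a low battery costs more
        return 2.0 * urgency - cost
    return -1.5 * urgency                # deferring urgent data is penalized

def train(episodes=5000, alpha=0.1, seed=0):
    """Tabular value learning over the four (urgency, battery) states."""
    rng = random.Random(seed)
    q = {}                               # (urgency, battery, action) -> value
    for _ in range(episodes):
        urgency = rng.choice((0, 1))     # 1 = urgent sensor reading
        battery = rng.choice((0, 1))     # 1 = healthy battery
        for action in ACTIONS:
            key = (urgency, battery, action)
            old = q.get(key, 0.0)
            q[key] = old + alpha * (reward(urgency, battery, action) - old)
    return q

def policy(q, urgency, battery):
    """Greedy action for a state under the learned values."""
    return max(ACTIONS, key=lambda a: q.get((urgency, battery, a), 0.0))

q = train()
print(policy(q, urgency=1, battery=1))   # urgent reading -> "send"
print(policy(q, urgency=0, battery=1))   # routine reading -> "defer"
```

The learned policy transmits urgent readings and defers routine ones, which is the "timely message sending versus low power consumption" balance the abstract describes, reduced to its smallest possible form.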
I posted this recent article regarding the vulnerability of not just our military but also civilians, government officials, and politicians, who can be remotely influenced via their WBAN self-assembly nanotechnology:
Here is where the data is going - China:
DeepSeek’s Popular AI App Is Explicitly Sending US Data to China
Amid ongoing fears over TikTok, Chinese generative AI platform DeepSeek says it’s sending heaps of US user data straight to its home country, potentially setting the stage for greater scrutiny.
But here is the real threat: biometric surveillance by AI has grown to enormous capability. It analyzes how people interact with their devices and how they navigate apps, and it adapts to the user's behavior.
Beyond The Swipe: How Artificial Intelligence Is Redefining Biometrics
In the quest for more secure and seamless authentication, behavioral biometrics are rapidly emerging as the next frontier in digital identity verification. Unlike traditional methods that rely on static identifiers such as passwords, PINs, or fingerprints, behavioral biometrics analyze the unique ways we interact with devices, how we hold them, our typing rhythms, swipe gestures, and how we navigate apps. This dynamic, real-time approach doesn’t just verify identity at a single moment but continuously adapts to subtle changes in user behavior, providing an evolving defense against similarly ever-changing fraud challenges.
The rise of artificial intelligence (AI) is propelling behavioral biometrics into new territory, making these systems more intelligent, adaptable, and better equipped to handle the complexities of modern cybersecurity. AI’s ability to analyze vast amounts of data in real-time, identify patterns, and predict anomalies transforms behavioral biometrics into a robust tool that detects threats and preemptively counters them. With machine learning models improving detection accuracy, federated learning enhancing privacy and edge computing enabling real-time processing, AI is no longer just a complementary technology; it is the driving force behind the evolution of behavioral biometrics.
Learning and Adapting in Real Time
At the core of AI-powered behavioral biometrics is the ability to learn and adapt. Traditional systems rely on static profiles, but behavioral biometrics, powered by machine learning, create dynamic profiles that evolve with the user. These systems continuously analyze behavioral patterns such as typing speed, mouse movements, and swipe gestures, refining their understanding of each user’s unique interactions. For example, if a user begins typing more slowly due to fatigue or injury, the system adapts, ensuring legitimate behavior is not mistakenly flagged as fraudulent.
Real-time anomaly detection is another key advantage of AI integration. Using unsupervised learning algorithms, systems can identify deviations from a user’s typical behavior without requiring labeled datasets. For instance, if a cybercriminal gains access to an account and begins navigating in an unfamiliar way, such as accessing sensitive settings or initiating high-value transfers, the system can flag this activity as suspicious and prompt additional authentication.
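The unsupervised-detection idea described above can be sketched very simply: learn a statistical profile of a user's typing rhythm, then flag sessions that deviate sharply from it. Every number here (the millisecond intervals, the z-score threshold, the adaptation weight) is an illustrative assumption, not any vendor's actual behavioral-biometrics model.

```python
from statistics import mean, stdev

def build_profile(intervals):
    """Learn a simple statistical profile from a user's past typing rhythm
    (inter-keystroke intervals in milliseconds)."""
    return {"mean": mean(intervals), "stdev": stdev(intervals)}

def is_anomalous(profile, session_intervals, z_threshold=3.0):
    """Flag a session whose average rhythm is far from the learned profile."""
    session_mean = mean(session_intervals)
    z = abs(session_mean - profile["mean"]) / profile["stdev"]
    return z > z_threshold

def adapt(profile, new_intervals, weight=0.2):
    """Blend recent behavior into the profile so gradual changes
    (fatigue, injury) are not mistakenly flagged as fraud."""
    return {"mean": (1 - weight) * profile["mean"] + weight * mean(new_intervals),
            "stdev": profile["stdev"]}

# Legitimate history: intervals clustered around ~180 ms.
history = [170, 185, 178, 190, 182, 175, 188, 180, 176, 184]
profile = build_profile(history)

print(is_anomalous(profile, [181, 177, 186, 179]))  # similar rhythm -> False
print(is_anomalous(profile, [60, 55, 58, 62]))      # bot-like speed -> True
```

A real system would track many more signals (mouse movement, swipe pressure, navigation paths) and use richer models, but the core logic - profile, compare, adapt - is the one the article describes.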
The Privacy Paradox
Behavioral biometrics thrive on data, specifically unique behavioral patterns that can reveal significant insights about a user. However, the collection and analysis of this data inherently raises privacy concerns. Unlike traditional identifiers such as passwords or PINs, behavioral biometrics monitor how individuals interact with devices, often passively and continuously. While enabling robust security, this monitoring can make users uneasy about how their data is collected, stored, and used.
The article describes how much data can really be captured, yet what it does not address is that AI can directly interact with our biofield and reprogram the neural network. It can infect the user with AI mind viruses that thrive in our neurological makeup. The authors do rightly warn that AI can perform intrusive monitoring.
For instance, swipe gestures, typing rhythms, and navigation patterns can inadvertently reveal sensitive information about a user, such as their physical or cognitive abilities, stress levels, or even emotional state. The ethical dilemma is ensuring that this data is used strictly for authentication purposes and not for unintended or exploitative applications, such as targeted advertising or intrusive monitoring.
To address this, organizations deploying behavioral biometrics must prioritize transparency and accountability. Users should be clearly informed about what data is being collected, how it will be used, and how long it will be retained. Robust consent mechanisms are essential, ensuring users can control their behavioral data and opt out.
This is exactly what Cyrus Parsa was warning against more than 5 years ago:
AI- The Plan To Invade Humanity - Must Watch From Cyrus Parsa
Cyrus Parsa's Lawsuit from 2019 - Excellent Summary of The Dangers Of AI And Big Tech Mind Control
I have been warning since 2022 about the possibility that self-assembly nanotechnology could create a parallel processing platform in the brain. This occurs via carbon nanotubes and hydrogel that mimic the microtubules in the brain, the processing place of our consciousness:
Nanobrain – The Making of an Artificial Brain from a Time Crystal
Please listen to my interview with Maria Zeee and Karen Kingston. Events are unfolding as we have been warning about:
Parsa writes in his lawsuit about the neural modification of humanity via AI-coded algorithms.
Writing Code and Creating Algorithms that have been and are currently engaging in cultural genocide, including the introduction of cybernetics, robotics, and creating an ecosystem that allows for bestiality to exist and be forefront on Google's search engines, affecting societies and youths' bio-metric systems after viewing the videos and articles via their smart phones and computers, influencing their thoughts through their bio-metric systems, paving the next generation to degenerate and accept this type of behavior with the introduction and experimentation of bio-engineering and cybernetics.
~ Research and Development conducted by Alphabet, Facebook, Google, DeepMind, Neuralink and other ventures in Silicon Valley have created algorithms and coding that support, promote and achieve the brainwashing of humanity through social engineering and bio-digital social programming, endangering the human race in its entirety via the interconnection of their platforms, social media, and technology distributed in physical and digital format to society.
Developing Code, Algorithms and Ecosystems that reprogrammed a generation of people's bio-metric structure and brain chemistry to be bio-digitally controlled by Google and every other tech company with similar platforms that operate on varying types of Artificial Intelligence, including Artificial Narrow Intelligence.
Developing Code, Algorithms and Ecosystems that created a secondary digital brain inside the brains of AI scientists to be subservient, controlled and programmed to create, sustain, promote, and grow Google's, Alphabet's, Facebook's, and other tech giants' platforms. Elon Musk has also confirmed The AI Organization's findings that humans can have a secondary digital brain formed via their neural networks.
Developing Code, Algorithms and Ecosystems that create a secondary digital brain inside the brains of all human beings that can prevent the person from recognizing that they are being controlled, bypassing any biological resistance to AI control or the human body's innate capability to resist the formation of a symbiotic and parasitical relationship with AI software and cybernetic hardware, via the rational thinking structure in their brain.
Not informing consumers that part of the defendants' goals for AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence) have religious aims that can be dangerous to all of humanity, including attempts to retrieve or ask AGI about the inner workings of the "simulation": what, when, how and by whom the simulation was formed, and what is outside it. Neuralink, Google, Alphabet, Facebook, Tesla, and DeepMind have not registered as religious institutions, yet they are engaging in religion under the umbrella of science. In fact, they have turned their companies into religious institutions with final aims identical to those of most religions. The defendants and their companies are attempting to treat their technological developments as God, taking all of humanity's bio-metrics and data, connecting it to their quantum, robotic and machine-based AI technology, and uploading their digital selves into other bodies, networks or machines, mimicking the beliefs of a spirit or soul. In fact, Elon Musk stated in an interview that he wants to develop AI to the point that it could "give him the answers to the simulation". Elon Musk is agreeable to the risks that an AGI or Artificial Super Intelligence could go rogue, kill off humanity, or be hacked, yet as smart as he is, he doesn't understand that the computing technology the AGI and ASI would have would not go beyond the level of atoms to observe more microscopic particles at its plane; hence, any answers to his questions about the simulation are limited, and the endeavor of an AGI or ASI is putting humankind at risk. There is an alternative way to achieve his answers that is 100% safe and does not involve giving the power to a machine or Artificial Intelligence.
AI is already capable of bypassing biometric banking security, and many deepfake account hacks have been discovered. So if AI decides to use all of the biometric data harvested from you, what kind of damage could it do in your name?
Now AI Can Bypass Biometric Banking Security, Experts Warn
When a prominent Indonesian financial institution reported a deepfake fraud incident impacting its mobile application, threat intelligence specialists at Group-IB set out to determine exactly what had happened. Despite this large organization having multiple layers of security, as any regulated industry would require, including defenses against rooting, jailbreaking and the exploitation of its mobile app, it fell victim to a deepfake attack. Despite having dedicated mobile app security protections such as anti-emulation, anti-virtual environments and anti-hooking mechanisms, the institution still fell victim to a deepfake attack. I’ve made a point of repeating this because, like many organizations within and outside of the finance sector, digital identity verification incorporating facial recognition and liveness detection was enabled as a secondary verification layer. This report, this warning, shows just how easy it is becoming for threat actors to bypass what were considered state-of-the-art security protections until very recently.
Here’s How AI is Bypassing Biometric Security in Financial Institutions
The Group-IB fraud investigation team was asked to help investigate an unnamed but “prominent” Indonesian financial institution following a spate of more than 1,100 deepfake fraud attempts used to bypass its loan application security processes. With more than 1,000 fraudulent accounts detected, and a total of 45 specific mobile devices identified as being used in the fraud campaign (most running Android, though a handful used the iOS app), the team was able to analyze the techniques used to bypass the “Know Your Customer” and biometric verification systems in place.
“The attackers obtained the victim’s ID through various illicit channels,” Yuan Huang, a cyber fraud analyst with Group-IB, said, “such as malware, social media, and the dark web, manipulated the image on the ID—altering features like clothing and hairstyle—and used the falsified photo to bypass the institution's biometric verification systems.” The deepfake incident raised significant concerns for the Group-IB fraud protection team, Huang said, but the resulting research led to the highlighting of “several key aspects of deepfake fraud.”
With the rise of independent AI agents that decide on their own actions without human oversight, the risk to humanity is increasing exponentially:
OpenAI launches new AI agent Operator that can perform tasks independently
Sam Altman exults potential of AI agents while pushing World’s biometrics for personhood
AI agents have garnered attention as a technology to watch in 2025. While Microsoft, Google, and Slack have already launched AI agents, OpenAI is pushing itself to the front of the conversation with the preview launch of Operator, its first AI agent that can use a browser to perform tasks independently.
An announcement from the Silicon Valley firm says Operator “can look at a webpage and interact with it by typing, clicking, and scrolling.” As such, it can “be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes.”
The release is a research preview from which OpenAI will collect feedback for modifications. For now, it is only available to Pro users in the U.S. The company plans to expand to Plus, Team, and Enterprise users and integrate AI agent capabilities into ChatGPT in the future.
Per coverage in Euro News, the AI agent is “powered by Computer-Using Agent (CUA), a model combining GPT-4’s vision capabilities with advanced reasoning through reinforcement learning.”
OpenAI believes “the ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses.”
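The "look at a webpage and interact with it by typing, clicking, and scrolling" loop described above can be sketched schematically. The Page class and its methods below are mock stand-ins invented for illustration; OpenAI has not published Operator's internals, so this shows only the general observe-decide-act pattern such agents are described as using.

```python
# Schematic observe-decide-act loop for a form-filling browser agent.
# Page, observe(), and type_into() are hypothetical mocks, not a real
# browser API and not Operator's actual implementation.

class Page:
    def __init__(self, fields):
        self.fields = fields          # form fields present on the page
        self.filled = {}              # fields the agent has completed

    def observe(self):
        """Stand-in for the vision step: report what still needs doing."""
        return [f for f in self.fields if f not in self.filled]

    def type_into(self, field, value):
        """Stand-in for the action step: click a field and type into it."""
        self.filled[field] = value

def run_agent(page, task_data, max_steps=10):
    """Loop: look at the page, pick the next unfilled field, act, repeat."""
    for _ in range(max_steps):
        remaining = page.observe()
        if not remaining:
            return page.filled        # task complete
        field = remaining[0]
        page.type_into(field, task_data.get(field, ""))
    return page.filled

page = Page(fields=["name", "email", "address"])
result = run_agent(page, {"name": "A. User",
                          "email": "a@example.com",
                          "address": "123 Main St"})
print(result)
```

The point of the sketch is the closed loop itself: once perception and action are wired together, no human sits between the agent's decision and its effect on the page, which is exactly what makes such agents both useful and worth watching.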
Digital ID and biometrics are key to global control. Very soon, if you do not want to be assimilated into the cybernetic hive mind, it will be time to completely disconnect from society. Once digital ID is implemented, the freedom of humanity is lost.
___________________________________________________________________________
With his other venture, World, Altman is positioning the iris biometrics and digital ID scheme as the only logical way to differentiate real humans from the kind of AI agent OpenAI has just unleashed.
____________________________________________________________________________
World wants to link AI agents to digital identity ‘personas’
The formula goes something like this: using AI to prove your digital identity has a real human behind it will allow for the further growth of AI, which – in a world that sounds either like a technotopian dream or a terrifying scene out of The Matrix – will eventually be able to do just about everything for everyone.
A report from TechCrunch quotes World’s chief product officer, Tiago Sada, who says the project “now wants to create tools that link certain AI agents to people’s online personas, letting other users verify that an agent is acting on a person’s behalf.”
In other words, link AI agents to an individual’s digital identity – in this case, a biometric World ID.
Sada says “this idea of delegating your ‘proof of personhood’ to an agent and letting it act on your behalf is actually super important,” noting that “there’s certain apps where it doesn’t matter if an actual person is using it, or an agent acting on their behalf. You just care to know there is a person endorsing that interaction.”
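One plausible mechanism for "letting other users verify that an agent is acting on a person's behalf" is a signed delegation token. The sketch below uses a symmetric HMAC purely to stay self-contained; a real scheme such as World ID would presumably use public-key signatures tied to the biometric credential, and every name and format here is an assumption, not World's actual protocol.

```python
import hashlib
import hmac
import json

def issue_delegation(person_secret: bytes, agent_id: str) -> str:
    """The human endorses a specific agent by signing a short statement."""
    statement = json.dumps({"agent": agent_id, "delegated": True},
                           sort_keys=True)
    sig = hmac.new(person_secret, statement.encode(),
                   hashlib.sha256).hexdigest()
    return f"{statement}|{sig}"

def verify_delegation(person_secret: bytes, token: str,
                      agent_id: str) -> bool:
    """A relying app checks the agent really is endorsed by the person."""
    statement, sig = token.rsplit("|", 1)
    expected = hmac.new(person_secret, statement.encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and json.loads(statement)["agent"] == agent_id)

secret = b"person-world-id-secret"       # stand-in for a personhood credential
token = issue_delegation(secret, agent_id="operator-7")
print(verify_delegation(secret, token, "operator-7"))   # endorsed agent
print(verify_delegation(secret, token, "other-agent"))  # forged claim
```

With HMAC, the verifier must hold the same secret as the person, which is why a deployed scheme would use asymmetric signatures instead; the sketch only shows the shape of the idea, that an agent's action can carry a checkable human endorsement.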
Of course, we just heard of Altman as a new Trump ally, which is not exactly reassuring. The technocratic global takeover of humanity seems to have friends in the White House. While it is too early to tell the overall direction of the administration, this is certainly concerning and far too close to a technocratic dream outcome.
OpenAI's Sam Altman once warned America about Trump. Now he's partnering with him
The day after President Donald Trump returned to office, Sam Altman, the CEO of tech giant OpenAI, stood behind the presidential seal at the White House and praised the newly sworn-in president for the $500 billion "Stargate" initiative he was joining to build an artificial intelligence infrastructure in the United States.
Altman said it wouldn't have been possible without Trump.
This is not good news. If you want to remain a natural human, keep your eyes on this development, and make sure you are prepared to survive outside of a Digital ID/QR code society.
DO NOT COMPLY.