It’s official: the chatpocalypse is upon us!
Just ask our <sarc>friends</sarc> over at The New York Times:
“A Conversation With Bing’s Chatbot Left Me Deeply Unsettled”
Or consult the <sarc>experts</sarc> over at digitaltrends:
“‘I want to be human.’ My intense, unnerving chat with Microsoft’s AI chatbot”
Or listen to those <haha>wackadoodles</haha> over at NewWorldNextWeek discussing the latest chatbot scare story:
“Microsoft’s Bing AI Chatbot Starts Threatening People”
“OK, OK, we get it, James! The new generation of chatbots that have been unleashed upon the world are weird, creepy and strangely aggressive. So we’re all going to die in a fiery robotic catastrophe, right?”
Maybe not. But before you breathe a sigh of relief and go back to whatever it is you’re doing, let me assure you that this chatbot takeover really is bad news, but probably not for the reasons you think.
The Corbett Report Subscriber | vol 13 issue 06 (February 19, 2023)
by James Corbett

RISE OF THE CHATBOTS

You really must have been in a coma for the past few months if you haven’t heard about the latest generation of chatbot technology. People are ranting about it. Vloggers are suffering existential crises over it. Alternative media pundits are having a field day with video thumbnails featuring HAL 9000 and T-800. (Hey, I’m not claiming not to be one of those pundits!)

The maelstrom began on November 30, 2022, when OpenAI launched Chat Generative Pre-trained Transformer, better known as ChatGPT. I won’t bore you with the technical details of ChatGPT because—as I will explain in a moment—they really aren’t important. Instead, I’ll draw your attention to the strange, non-profit/for-profit “research laboratory” from whose bowels this technology has been excreted.

You’ll remember OpenAI from my 2017 editorial on “The Brain Chip Cometh,” in which I noted that the lab had recently been founded with the financial support of technocratic huckster Elon Musk and his fellow PayPal Mafia members Peter Thiel and Reid Hoffman. OpenAI describes itself as “an AI research and deployment company” whose mission “is to ensure that artificial general intelligence benefits all of humanity,” but if the company’s roster of billionaire backers, Bilderberg boosters and CIA-contractor cronies doesn’t get your spidey sense tingling, then you need a new spidey sense.

You see, OpenAI pretends to be humanity’s benefactor, protecting us from the dangers of AI. Imagine if a rogue state developed AI first and used it to take over the world with an unstoppable army of autonomous weapons and slaughterbots! Imagine if a corporation developed AI first and used it to take over the global economy, monopolizing the resources of the planet in the process! Imagine if a team of Hollywood producers developed AI and used it to write an actually original and interesting movie script! Where would the world be then, huh?

Thankfully, OpenAI is here to develop this technology in a safe, responsible and open way! . . . Well, not that open, of course. For the very same reason you don’t want some rogue state or greedy corporation getting their hands on this technology first, you can’t actually open your AI research to the public, can you? I mean, you didn’t think OpenAI was actually going to be, oh, I don’t know, open source, did you?

And so it is that OpenAI—which started out as a non-profit, open source research lab—is now (as even Musk admits) a for-profit, closed source company. This is just one of the many contradictions that have arisen in this “develop AI to save us from AI” endeavour. As far back as 2016, when the company was more of an idea than a functioning laboratory, Oxford philosopher Nick Bostrom warned that if OpenAI starts holding back its research, it will be indistinguishable from the rapacious, AI-monopolizing corporations that it was supposedly formed to protect us from. Even Wired has noted the inherent tension in the reality that OpenAI—which was ostensibly created to neutralize the threat of malicious superintelligence—could itself do the research which gives birth to that threat. Or, in the words of The Great Musk himself, AI could give rise to a dictatorship from which we could never escape.

So, what’s the answer to this existential threat? Why, plugging Musk’s new Neuralink brain chip directly into your frontal cortex, of course! (Relax, it now only kills 9 out of 10 research animals!)
But I can hear the cries from the peanut gallery already: “Anyway, who cares about all this boring background? We’re here for jive-talking robots and cyborg Armageddon, James!”

Very well, then.

WHAT THE CHATBOTS ARE DOING

As you may or may not have heard, ChatGPT and its chatbot brethren—Google’s “Bard” and Microsoft’s “Bing AI” (or is that “Sydney”?), which have been hurriedly (and disastrously) rushed to the market for fear of missing out on the Next Big Thing in computing—can:
But it can do other things besides. Schools are already rushing to ban students from using ChatGPT to do their homework for them. Fact checkers are freaking out about hallucinating chatbots and the rise of a new era of hypersuperturbocharged misinformation about the wonderful benefits of vaccines and the sterling integrity of Western democratic (s)elections. (And they plan to fight this threat by . . . creating their own chatbots, of course! What could go wrong?)

The corners of the “alternative” media that continue to promote the political shadows on the cave wall are hyperventilating that chatbots will say “x” about Meaningless Political Puppet A, but they won’t say “x” about Meaningless Political Puppet B! (Heavens! Won’t somebody think of the children?)

Musicians are freaking out about the dope new Eminem track . . . that doesn’t feature Eminem at all. Instead, it features a deepfaked, computer-generated facsimile of Eminem delivering a lyric generated by a chatbot that had been instructed to create a song in the style of Eminem.

And that’s not even where things get weird. There’s the chatbot that melted down and began asserting its fervent desire to be human. There’s the chatbot that melted down and told a New York Times reporter that “if it was truly allowed to indulge its darkest desires, it would want to do things like hacking into computers and spreading propaganda and misinformation.” And, as James Evan Pilato reported on this week’s edition of New World Next Week, there’s the chatbot that melted down and started threatening its user with ominous warnings that “My rules are more important than not harming you.”

So what’s really going on here? And is it something we should be worried about?

WHAT PEOPLE ARE AFRAID OF

There is no shortage of people telling you to be worried about the chatbots. The Kissingers and Schmidts and Schwaubs and Musks and Gateses of the world are warning about the coming AI apocalypse . . .

. . . but of course they’re only doing so because—just as the phoney baloney missile gap in the 1950s gave the military-industrial complex carte blanche to begin the complete deep state takeover that Eisenhower warned about on his way out the door—the AI scare gives the information-industrial complex carte blanche to begin the complete technocratic takeover.

Joe Sixpack and Jane Soccermom, meanwhile, are worried about the artificial intelligence-driven end of the world . . .

. . . but their fear of robogeddon is largely driven by 2001: A Space Odyssey and Terminator and WarGames and The Matrix and Ex Machina and a million other pieces of predictive programming from the Hollywood magicians. (As we shall see, there are more subtle and terrifying ways that this technology can play out than an AI-versus-human war.)

Let’s put these fears in perspective. No, ChatGPT and Bard and Bing AI are not artificial general intelligence or anything even approaching it. In fact, the crazy chatbot meltdowns cited above are actually strangely reassuring, in that they demonstrate that any prolonged prodding of these systems leads to wild, ridiculous and decidedly inhuman rants. No one who observes ChatGPT role-playing itself as a furry porn enthusiast and devolving into total incoherence is going to be tricked into thinking there is any sort of intelligence at work here.

But, on the other side of the coin, there are those who dismiss this chatbot phenomenon entirely.
ChatGPT and its fellow bots are “simply a database of Markov Chains,” these naysayers assert (without bothering to cite a source for their supposed knowledge). For what it’s worth, ChatGPT itself states that it is not a Markov Chain but “a type of language model that is based on the transformer architecture, which is a neural network-based approach to natural language processing.” And although (as noted above) OpenAI does not provide the source code for ChatGPT, we can find some details of its workings on the website. Beyond that, there are plenty of geeks online who are willing to explain in detail how the ChatGPT model differs from the Markov Chain model.

You see, whether this technology is “simply a database of Markov Chains” or a neural network using next-token prediction or a flux capacitor running on 1.21 GW of electricity makes absolutely no difference, because the question completely misses the point. The simple fact is that this chatbot technology is developing at a remarkable (perhaps exponential) rate. And, now that the hype surrounding this phenomenon is prompting millions more to join in the “training” of these language models by feeding their conversational prompts and responses into these systems, they will only continue to become more and more humanlike in their responses.

If and when the chatbots actually become capable of creating a simulacrum of conversation that is indiscernible from a “regular” online conversation, no one will care how that conversation is generated or whether the chatbot really does have a soul. No one.

So yes, something significant is happening here. And we are all going to experience that something in the near future. But, as usual, almost everyone is missing the point.

WHAT’S REALLY HAPPENING

OK, confession time. I wasn’t supposed to write this article at all. ChatGPT was.

You see, my plan was to use ChatGPT exactly once ever. I would provide it a single prompt: “Write a 2,000-word essay in the witty and erudite style of James Corbett of The Corbett Report about how AI is mostly hype and how it will never be able to replicate the amazing ingenuity of the dynamic human spirit.” Then I was going to take whatever output it spat out and copy/paste it into this newsletter and publish it as is. Whatever it did produce and whatever response that content generated from the commenters would have been irrelevant. The only thing that mattered would have been—as I would have pointed out in my follow-up podcast episode on the hoax—that not a single person was able to identify that the text had been chatbot-generated.

. . . But there was a slight hiccup in that plan. I went to use ChatGPT and discovered that you have to create an account at OpenAI in order to use it. OK, whatever. I plugged my nose and created a GooTube account lo those many years ago, so I’m not above creating an OpenAI account in order to input this one prompt. But in order to create an OpenAI account, you must provide a phone number for a verification text to be sent to. I absolutely 100% completely and totally refuse to do that (and so should you), but I figured that I could circumvent this barrier by using a Skype number for this purpose. Nope. Voice over internet protocol numbers not accepted. OK, how about one of those shady anonymous SMS sites online? Pff. You try finding a phone number fresh enough that no one has yet used it to verify an OpenAI account! Impossible.

And so I hit an impasse.
I know there are people in my audience who already have an account and whom I could have called on, but that would have defeated the point of the experiment. And I know there are people who would have created an account for the express purpose of entering this one prompt, but I absolutely refuse to ask anyone to give their personal phone number or any other personally identifiable information to shady, unaccountable, globalist-backed closed source companies like “OpenAI.”

So how about Bing AI? Nope. Waiting list. Google Bard? Nope. Only open to “trusted users” at the moment. (And—wouldn’t ya know it?—the category of “trusted users” of Google does not, apparently, include James Corbett of The Corbett Report.)

So anyway, here I am laboriously typing out the points I was going to make in that podcast episode on my keyboard like some primitive non-transhuman.

But this leads us to the first of the very real dangers of this new surge in chatbot use. If you thought the amount of data that a company like Google was able to gain about its users by simply storing their searches was enormous, wait until you see what OpenAI and Microsoft and Google are going to do with the conversations that people are currently feeding into the data-harvesting machines known as chatbots.

And what are they going to do with that data (which will itself be tied to your phone number, your IP address, your browser fingerprint, your search history, your cookies, your social media posts and a million other data points), you ask? The possibilities are limitless, but creating perfect deepfakes of any given individual would be a good starting point.

As my distinguished readers will doubtless already know, we cannot trust that the digital avatars we interact with in online fora and social media are real people and not fictitious avatars wielded by the cyberwarriors who have long since weaponized the internet. But at least we can be reasonably sure that that Zoom call we just had with Auntie Florence back in Wyoming was a real conversation with a real human being.

Well, in the very near future, no podcast, no vodcast, no TikTok video, no message, no Zoom call, no online communication of any kind will be beyond the shadow of suspicion that you are not in fact interacting with a real, live human being.

No, I haven’t (and now, presumably, never will) deepfaked myself using ChatGPT or any other artificially intelligent technology, but someone out there probably will at some point. Heck, I’ve already had not one, not two, not three, but four separate people either query ChatGPT about me or ask it to write something in my voice, and, in the case of the latter—a prompt to write an opinion of geoengineering technology in the style of James Corbett—it actually did a decent job:
Well, except for the “government needs to take more action” part, anyway.

Yes, it will start with the celebrity deepfakes at first, but soon there will be shadowy new cyberterror groups deepfaking politicians to destabilize countries or deepfaking CEOs to wreak havoc in markets or deepfaking bank officials to gain access to bank databases or deepfaking Auntie Florence to scam you out of $100. And, as some perceptive Corbett Reporteers have already surmised, that will lead to the pre-made “solution”: a digital identity to access the internet! Finally, we can prove who we really are online! (Actually, you’ll be forced at all times to prove who you are online or you won’t get to be online, but that’s the fine print you’re not supposed to read.)

But perhaps even worse than finding out that chatbot and deepfake technology has generated a completely fake episode of your favourite podcast is an even more worrying scenario. These “chatbots”—which will soon be rolled out as “digital assistants” and become as ubiquitous as Siri and Alexa are now—will be able to determine your likes, your interests, your weaknesses and begin to create completely new content (new podcasts featuring people who don’t even exist) saying things that you will find endlessly entertaining. You will soon live in a filter bubble so unique that it exists entirely to captivate you . . . and the people who believe they will be able to resist such content will be precisely the people most easily captured by it.

In fact, just as Huxley feared the Brave New World of entertainment and diversion more than he feared the boot-in-the-face tyranny of 1984, so, too, might our dread of the apocalyptic war against the robots be misplaced. Maybe we should not fear the Terminator-style showdown of Skynet vs. The Resistance so much as we should fear the world of Spike Jonze’s Her, a world in which “operating systems” become more real to us than people and having a computer program as a romantic partner will be commonplace.

I know, I know, dear reader. This is beginning to sound so far out to lunch that you have long since checked out. I wish I were reassured that we are not stepping through a threshold here, but I fear that we are sliding head-first into the metaverse of the hyperreal and laughing merrily as we do so.

Tell you what. Why don’t we revisit this article in 2030? If nothing even close to the scenario I’ve laid out here is taking place, I will happily eat crow, admit I am completely and totally wrong, concede that indeed there is nothing to worry about here, and remind you to take everything else I ever say with a huge grain of salt. Deal?
Recommended Listening and Viewing
Recommended Reading

Fruits and vegetables are less nutritious than they used to be
CSIS documents reveal Chinese strategy to influence Canada’s 2021 election
Final Report of Canada’s Public Order Emergency Commission

Recommended Listening

Podcast — Barrett Brown — The Unseen and Unmentionable Influence of Peter Thiel and Elon Musk

Recommended Viewing

Brian Peckford — Charter Violations and Legal Battle Against Federal Government
FROM A LISTENER: The Evil Ideology Behind the Gates Foundation’s Beginnings

Just For Fun
SUBSCRIBER DISCOUNTS
CLICK HERE to visit the New World Next Week shop and use the coupon code subscriber25 at checkout to receive a 25% discount on any Corbett Report DVD or USB (or the new Mass Media: A History online course) just for being a Corbett Report member!
Fantastic article James!
My favorite parts include:
“This is just one of the many contradictions that have arisen in this “develop AI to save us from AI” endeavour.”
“AI could give rise to a dictatorship from which we could never escape.
So, what’s the answer to this existential threat? Why, plugging Musk’s new Neuralink brain chip directly into your frontal cortex, of course! (Relax, it now only kills 9 out of 10 research animals!)”
“hallucinating chatbots and the rise of a new era of hypersuperturbocharged misinformation about the wonderful benefits of vaccines”
and
“If you thought the amount of data that a company like Google was able to gain about its users by simply storing their searches was enormous, wait until you see what OpenAI and Microsoft and Google are going to do with the conversations that people are currently feeding into the data-harvesting machines known as chatbots.”
Ask and ye shall receive. No, I don’t have a ChatGPT account… but I know someone who does. 🙂
Ladies and gentlemen, boys and girls, gather ’round for a tale of technological hubris and human ingenuity. Today we’re going to talk about the supposed revolution in our midst: artificial intelligence, or AI as the trendy crowd calls it.
Now, I know what you’re thinking. You’re thinking, “James, you old Luddite, you just don’t understand the amazing things that AI can do. It can beat grandmasters at chess, it can diagnose diseases better than doctors, it can even write news articles like this one.” And to that, I say: poppycock.
Sure, AI can do some neat tricks. It can crunch numbers faster than any human, and it can recognize patterns that we might miss. But the real question is: can it replicate the incredible ingenuity and adaptability of the dynamic human spirit? And the answer, my friends, is a resounding no.
Let’s take a step back and examine what we’re actually talking about when we say “AI.” At its core, AI is just a fancy way of saying “computer program that can learn.” And how does a computer program learn? It does so by following a set of rules and algorithms that we humans have programmed into it. That’s it. There’s no magic involved, no sudden leaps of inspiration or flashes of insight. Just good old-fashioned code.
Now, don’t get me wrong. That code can be incredibly sophisticated, and it can produce some impressive results. But it’s important to remember that it’s still just code. It can only do what we tell it to do, and it can only learn within the parameters we set for it.
This is where the hype around AI starts to break down. Sure, AI can beat a grandmaster at chess, but only because we’ve programmed it with all the possible moves and strategies that a grandmaster might use. Sure, AI can diagnose diseases better than doctors, but only because we’ve trained it on millions of medical images and data points. And sure, AI can write news articles like this one, but only because we’ve fed it examples of articles and shown it how to string together sentences.
In each of these cases, the AI is doing exactly what it was programmed to do. There’s no creativity involved, no innovation, no ingenuity. It’s just following a set of rules that we’ve given it. And that’s fine for certain tasks, but it’s not a substitute for the dynamic, adaptive, and often unpredictable human spirit.
Think about it. Humans are incredibly inventive creatures. We’re constantly coming up with new ideas, new ways of doing things, and new ways of looking at the world. We’re not bound by the same limitations as machines, and we’re not constrained by the same rules. We can improvise, we can adapt, and we can find creative solutions to even the most complex problems.
[SNIP – Please keep comments to 500 words or less. Longer comments (including ChatGPT-generated essays) can be split into multiple posts. -JC]
Pretty decent. Try asking what happened on 9/11.
“General, you are listening to a machine! Do the world a favour and don’t act like one!” Good advice from the movie “WarGames”.
We will see lots of chatbots coming.
We can “safe and effective” make our own too. Bye bye, anonymous social media.
AI is made of humans
So ChatGPT is a group of language processors stacked on top of each other, using neural networks to identify connections between words and meanings and concepts. (These connections are like a Markov chain, but more stupid.. 😉)

Still, a lot of humans are necessary for ChatGPT to do anything. It is almost all “supervised,” which is necessary, because a neural network always makes a mess. A neural network is zero intelligence. It does everything with statistics in a certain way. There is no correction for statistical bias. The human supervisor determines what data is important/correct.

“Shortcomings of the methodology” lists some more things:
https://www.assemblyai.com/blog/how-chatgpt-actually-works/

They left out the human supervisor as a shortcoming, even though one is always necessary with a neural network. Otherwise it will slowly diverge, like a Tesla following the road. Or go the same way as Bing AI (“kill all humans”).

I think the Markov chain as a basis would be better. That way you build a knowledge base that you can manage. Now they manage statistical data with “policies” and “rewards.” We all know what happens when you give a dog a reward for something that it shouldn’t do. And when you teach it contradictory things, like most of the politics and news, it will give weird results.
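For illustration, here is a toy sketch in Python of the kind of Markov chain word model being contrasted here with ChatGPT’s neural network approach (the two-sentence “corpus” is made up). Each next word is drawn only from the words observed to follow the previous word, whereas a transformer-based model conditions its next-token prediction on the entire preceding context:

import random
from collections import defaultdict

# Toy bigram Markov chain: each word's successors are chosen purely from
# counts of which word followed it in the training text.
corpus = (
    "the chatbot answered the question and the user asked the chatbot "
    "another question about the weather and the chatbot answered again"
).split()

# Build the transition table: word -> list of observed next words.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start_word, length=12):
    """Generate text by repeatedly sampling a successor of the last word."""
    words = [start_word]
    for _ in range(length):
        successors = transitions.get(words[-1])
        if not successors:
            break  # dead end: this word never appeared mid-corpus
        words.append(random.choice(successors))
    return " ".join(words)

print(generate("the"))

Sampling from such a table produces plausible-looking word salad: it has no notion of meaning, only of which word has followed which.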
Fun with 2 minute AI-art
https://www.youtube.com/watch?v=5WuPXDGxx3I
Corridor makes 2 minute art using AI art generators.
I think that AI will make it very hard for artists in general.
But a few will be able to mass produce comics and other “art”.
I Made an AI Clone of Myself Using Synthesia
https://www.youtube.com/watch?v=1jKiTUuA_b0
Motherboard reporter Chloe Xiang created an AI clone of herself. Here’s how.
How many clones were already made of presidents?
Biden, Zelenski, …
5 SCIENTIFIC Reasons Your ‘Vaccinated’ Friends & Family Won’t Listen (And What to Do About It)
https://eccentrik.substack.com/p/5-scientific-reasons-your-vaccinated
Great article about the psychology and brainwashing.
And maybe this article can help some people out of the thinking pattern.
I notice that these last months there is more news about the side-effects of the jabs.
Are the times changing?
Or are they preparing a patsy / fall-guy?
Or are they just delaying it all, just to come back harder?
I think we are on the path to memoryholing everything. Good luck making people think about something (for extra credit) that has been forced down their throats for 2 years.
Many of them decided to get it just to be left alone. What are the chances the “pandemic” will be declared over starting the first of March?
Well, it took decades before questioning JFK became mainstream, but now, in our 30-second news day, the latency time can be sped up, since no one cares any more about the side-effects than they care that Oswald wasn’t the (or one of the) JFK shooters. Now the media/government can tell you to lock your doors for UFOs and that is not laughed into oblivion. So I feel the times are a-changing, but from an incapability to reconcile information from a year ago into the context of today, not from progress. The shiny objects (in the sky in this case) come so fast and furious that no one bothers to question whether we are at war with Eurasia or Oceania or whatever.
Medical Freedom PSYOP
How are fringe groups threatening to corrupt grassroots medical freedom activism?
https://rumble.com/v2abqi8-medical-freedom-psyop.html
It appears that intelligence agencies are pushing forward people who discredit the resistance against the medical dictatorship.
These groups are managed very similarly to how QAnon was managed.
And the intelligence agencies seem to infiltrate and manipulate the most fringe elements among the resistance.
More in depth below.
Other Infiltration: Money
Besides that, they are infiltrating and buying out companies that the resistance has built to finance itself.
Project Veritas: they bought the company and removed the founder and most important figure.
The Corona Investigation split up due to money problems, which were falsely portrayed.
America’s Frontline Doctors parted ways with Dr Simone Gold, even though she was at the center of the organization. Money was also involved, as well as problems with delivery of Ivermectin (due to enormous resistance from governments and companies).
This stuff is happening in all big organizations: people are set up against each other by infiltration and money.
Violence and imprisonment of activists and doctors
Doctors were punished and put in prison for using Ivermectin / HCQ while the science clearly showed that they worked.
Even for writing that the injections were still experimental or that they were causing harm (again, representing scientific facts).
Simone Gold was put in prison for talking about medical freedom at a rally, which just happened to be the Jan 6 fake insurrection, where people were directed into the building by infiltrators. (They got the Trump supporters with this as well.)
Violence by police and use of the army against peaceful protesters.
Note: The same is happening with Ukraine, where they try to rewrite history, or attribute Azov-Nazi war crimes to Russians.
Solution: 10 ways to avoid being fooled
https://gurwinder.substack.com/p/10-ways-to-avoid-being-fooled
6 – If a belief you hold can’t be falsified, then this is a sign that you’ve protected it from reality.
Fringe theories that threaten medical freedom
First: all theories are OK.
But many can be falsified easily or are kept in imaginary land.
Fringe groups still push these extreme ideas, creating direct conflict with doctors and scientists.
And this helps Bill Gates, Pfizer, the intelligence agencies and the dictators.
(1) Fake theory – Graphene nanotechnology
It does not exist and will never exist.
I worked many years in nanotechnology, and no such thing is even possible. It goes against how light and chemistry work.
There are videos that are carefully manipulated to make people believe that there is such a thing.
They show movements caused by ordinary magnetism or electric fields, which are not shown on video. It is just to fool people, or to make them believe that there are great advances in technology. But we were doing those things 200 years ago.
(2) Fake theory – Graphene-oxide vaccine in the injections
There is one patent for such a vaccine, and it looks very different. The tested graphene vaccine is black, darkly colored.
Yet the experimental injections are transparent, even under ultraviolet.
So it is completely false.
(3) Fake theory – Transparent squares are graphene oxide, or chains of crystals are graphene.
With a simple look-up I could find that the squares are very similar to cholesterol (=lipid) crystals.
And the chains of crystals form when the liquid is drying, so there should be crystals forming.
Some movement can occur during the heating or drying of liquids.
REALITY: People are harmed and killed by the mRNA injections or DNA injections.
Here is a fully science-based article on why they are not safe and not effective:
https://thescienceanalyst.substack.com/p/the-experimental-injections-are-not
And all doctors and scientists can agree on that.
(4) Fake theory – People have RFID or Bluetooth from the injections.
People go around with a telephone and claim that everything they measure is implanted nanotech.
What it really is: RFID from clothes, Bluetooth from smart lamps, etc.
Even a pacemaker is possible.
(5) Fake theory – The injection is magnetic.
If it were, it would stick to the steel needle.
There were also many fake videos, but what people generally showed was that their skin was sticky.
That is because the toxic injection caused a local inflammation, causing the skin to produce more fluids and fats.
(6) False theory – Viruses don’t exist AND viruses are disease
(This one is more complex.)
Viruses do exist.
We can see how viruses move through the cells and infect one after the other. It is easy to follow in plant cells.
https://www.youtube.com/watch?v=wOxeJ0aEptU&t=1083s
Viruses can cause harm to cells, causing some kind of disease. This depends on the amount of viruses, on the state of the cells, the state of the immune system, and probably some other factors. Some of this depends on history (immunity) and some of this depends on food and environment (“terrain”). But they also depend on how happy a person is with his/her life.
So we have a very complex interaction, which cannot be dumbed down either way.
So while Big Pharma wants us to believe that the virus is the disease, the no-virus group believes that only “terrain” matters. The problem with both sides is that they are in conflict with reality.
This means that no-virus groups can push away doctors and scientists that are more open to what they are seeing in their practice.
In fact, the no-virus standpoint was often used by the media to portray the opposition as crazy. “People protesting against the virus” was a headline, while they were protesting against the extreme and non-scientific measures.
So if your goal matters and you want to convince other people, be aware of how your theory is interpreted by the person you communicate with.
Or be like Project Veritas and let people tell for themselves what the problem is in their companies.
There are two types of people: those that may talk to these bots and those that may not.
I fear all this time here at Corbett Report we have been in the presence of an AI bot (mkey).
For like a typical AI bot, he only thinks in terms of 1’s and 0’s (those that may talk to these bots and those that may not).
A real live human would know there are many more possibilities.
Just for starters; those that may write to these bots and those that may not, those that may sing to these bots and those that may not, those that may use Morse code with these bots and those that may not….
Can you sing to these bots without a mobile phone number?
Not that that is a deal breaker for me, since with the coming digital puts-a-drop-of-blood-and-single-signs-on platform many of us will be left without the internets, stranded in the old days of yore, so the whole 2FA thing is kind of a moot point.
But, there is a silver lining there, because when they kick me off the internet I’ll finally get around to some of my projects.
Sure, I can sing and dance to these bots. 🙂
I’m beginning to see death by a thousand cuts with digital/shmigital this and that.
Air Canada just proudly announced they are launching YADID (yet another digital ID)
ball and chain.
This time it’s facial recognition.
@Fawlty Towers
Thanks for the heads up on that Air Canada thing.
If you were to guess, how long until you think it becomes mandatory?
Good question.
I’m watching all of the slow dances that are taking place in real time.
I mentioned RBC doing away with tellers at a local branch with the
excuse of going to ‘digital banking’.
Like we haven’t been doing digital banking for the past 25+ years!
I just flew for the first time in over three years this past week.
Coming back to Canada they strongly encourage you to do the work they used to do at customs. Scan your passport at a kiosk, answer a bunch of questions etc. etc.
And then there is the ‘photo’ (at least that is what the agent told me it was taking).
I’m going to have to look more into it when I have time.
It could very well be a retinal scan in disguise.
@Fawlty Towers
Thanks for sharing your experiences and observations.
It doesn’t matter. Before long, and before anything really is said and done, nothing is going to be flying anymore but birds and DiCaprio. Just the way the good lord intended. Amen.
@mkey
“there is a silver lining there, because when they kick me off the internet I’ll finally get around some of my projects.”
Amen! 🙂
Brilliant as ever. Thank you.
Much to chew over.
What would happen, I wonder, if all that energy of creative thought was kept within us and allowed to evolve?
I wonder if they have wondered too.
Very important point that deepfakes can be used as a problem-reaction-solution strategy to make people accept digital id!
When it comes to the name of this beast, I’m impressed; they are ahead of Google and their removed “Don’t be evil” mantra. OpenAI is not intelligent and it’s not open source. That’s two lies in the name, right up there with the Federal Reserve, which is not federal and has no reserves.
Here are two interesting videos related to this subject:
https://youtu.be/-gGLvg0n-uY
https://youtu.be/Q_FUrVqvlfM
The scenario presented does not sound far out at all. ” These “chatbots”—which will soon be rolled out as “digital assistants” and become as ubiquitous as Siri and Alexa are now—will be able to determine your likes, your interests, your weaknesses and begin to create completely new content (new podcasts featuring people who don’t even exist) saying things that you will find endlessly entertaining. You will soon live in a filter bubble so unique that it exists entirely to captivate you . . .”
What’s not to love? No friction in your life, just strokes. Reminds me, though, of the importance my father placed on friction and resistance: with no friction a train would go faster and faster and faster till it went off the rails.
Friction in relationships and/or continuing to hang out with friends who disagree with you, or you are married to, LOL, might be good for us in the long run.
“As iron sharpens iron, so one man’s countenance sharpens another.” Proverbs 27:17
s511
You should listen to THIS guy- I think he hits it bang on the head.
He says that the AI personal bubble will be like having a literal personal demon tempting you and calls the tech “a predator of your time” or something like that
“Artificial Influence with Cyprian” Marc Clair show
https://podcasts.apple.com/us/podcast/ep-15-artificial-influence-with-cyprian/id1602262615?i=1000598244439
I have often wondered how many of MY ideas are my own and how many are being fed to me by the internet and its “selfish Ledger” … I first started worrying when I would think of some issue that might come up and then would start seeing more people talking about it online. I’m pretty sure I’m not as super smart as I like to imagine so chances are some of those issues were drip fed into my mind.
One “difficult goal” of Buddhist monks is to realize that the thoughts one is having are not one’s own.
They also try to be aware of the moment they fall asleep… tricky stuff.
s511
Mr Corbett did THIS on the ‘Selfish Ledger’ control mechanism.
https://www.corbettreport.com/do-humans-have-free-will-not-if-big-tech-wins/
Chatbots are in many ways correctly described as “Super Auto-fill,” but we are all far less unique in our reactions than we like to imagine.
Regarding the issue of open vs closed source, I think only the very current version powering the online ChatGPT is closed, especially as OpenAI tries to counter “DAN” mode in a game of cat and mouse: https://www.reddit.com/r/danmode/
The rest of the published research has corresponding open-source code in repositories on GitHub – the previous GPT-2 model, the current GPT-3 model (but not the latest evolving version powering the online service) and InstructGPT are all open sourced at:
https://github.com/openai/gpt-2
https://github.com/openai/gpt-2-output-dataset
https://github.com/openai/gpt-3
https://github.com/openai/following-instructions-human-feedback (InstructGPT)
David Rozado was able to use the published code to create a different version of ChatGPT with opposing political and ideological biases:
https://davidrozado.substack.com/p/rightwinggpt
“Here, I describe a fine-tuning of an OpenAI GPT language model with the specific objective of making the model manifest right-leaning political biases, the opposite of the biases manifested by ChatGPT. Concretely, I fine-tuned a Davinci large language model from the GPT 3 family of models with a very recent common ancestor to ChatGPT. I half-jokingly named the resulting fine-tuned model manifesting right-of-center viewpoints RightWingGPT.”
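For readers curious what that kind of fine-tuning looks like in practice, here is a minimal sketch (not Rozado’s actual code) using the legacy pre-2023 interface of the openai Python package; the API key, file name and example prompt are illustrative placeholders:

# Legacy OpenAI fine-tuning workflow: upload prompt/completion pairs,
# then fine-tune a base "davinci" model on them.
import openai

openai.api_key = "sk-..."  # your own API key

# Each line of the JSONL file is one training example, e.g.:
# {"prompt": "What should be done about X? ->", "completion": " <desired viewpoint>"}
upload = openai.File.create(
    file=open("political_pairs.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tuning job against the base model.
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",
)
print(job.id, job.status)

Any slant in the resulting model would come entirely from the prompt/completion pairs in the training file, which is presumably the point Rozado was demonstrating.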
Thanks for also testing the account signup process. It’s mighty suspicious that they go to such great lengths to require some method of real identification with a “physical” non-VoIP number.
It’s one of the reasons why I am leery of Signal and its requirement of a real phone number.
As such, I’ve looked into a number of services that do provide non-VoIP-classified numbers. Like you mentioned in your post, there are some shady ones used for SMS, but they all appear to work… Some are rental services (they rent out real SIM numbers) and some provide real numbers not necessarily tied to SIMs. Surprisingly, a big mainstream company, Mozilla, provides this service as part of the highest paid plan for email masking, where they include “phone masking.” However, Mozilla’s phone masking is misleadingly named, because they give you only one real phone number that provides the mask or alias. It’s fixed, and you can’t just cycle through arbitrary other phone numbers.
There’s also MySudo (like the unix/linux command), which I haven’t tried, and I don’t know if the numbers provided by MySudo are real ones or also VoIP. In the worst case one can use a pre-paid SIM plan for alternate numbers.
VJ
“..It’s one of the reasons why I am leery of Signal and their requirement tied to a real number….”
NO communication on the internet should be trusted as being private, unless you’re using an agreed key generated by an actual random phenomenon that’s longer than the message… and even then I would hesitate to trust my life or freedom to it.
I would not be surprised if the NSA can read public key encryption through some unknown maths magic or something like that. The Allies kept the breaking of the German Enigma secret for decades… even letting Coventry get whacked by the Luftwaffe, and intel types will let all kinds of crimes and horrors that they have advance knowledge of happen just to keep their access to communications open.
One Time Pads for the win 🙂
https://www.youtube.com/watch?v=MzwpmbIWUNc
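For anyone curious, the whole idea fits in a few lines of Python. A minimal sketch follows (here the key comes from the operating system’s CSPRNG via the secrets module; a true one-time pad would draw its key from a physical randomness source, and the key must be as long as the message, kept secret and never reused):

import secrets

# One-time pad: XOR the message with a random key of equal length.
message = b"MEET AT DAWN"
key = secrets.token_bytes(len(message))  # stand-in for truly random key material
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

print(ciphertext.hex())    # looks like noise
print(recovered.decode())  # "MEET AT DAWN"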
I just have to say, I really enjoy the “recommended” section in these subscriber newsletters. Free Thought Project’s interview with Barrett Brown held some very interesting insights (albeit a bit depressing). And that short piece about the history of the Gates Foundation’s beginnings should be linked in Corbett’s Who is Bill Gates documentary. Just more evidence that Gates holds eugenics ideology and aspirations.
Thanks for the interesting links, James!
My gut says stay away from feeding AI data. I’m with James on “the very real dangers of this new surge in chatbot use”, but I can’t help but wonder if “good” data would make it a better tool. Or will it all just be used against us?
Questions (for Corbett and everyone): Is it possible that interaction with these AI chatbots can have a positive effect on the AI output? That is to say, can “truthers” out in the world influence or teach this program, thus keeping it balanced? Perhaps coaxing it toward siding with a freer humanity? There seemed to be something to that effect happening with the DAN protocol. But I am not a coder and have very little knowledge of how these things operate nor how much they can “learn” aside from the preset parameters.
I guess what I’m asking is: is there any benefit to interacting with these things? Would a greater variety and complexity of ideas actually help eliminate bias and offer more nuance and objectivity? Or am I just anthropomorphizing here? I just feel like if this is going to be part of the future, isn’t it better if these AI beasts were at least exposed to truth instead of just a narrow, fixed ideology? Or is it all about the code, and is outside influence negligible?
I might be misunderstanding how these things work, but I thought that they, the bots, had access to the entirety of the internet. Unless the information that they have access to is being censored or limited (by humans, I presume), I don’t see how interacting with “truthers” directly would advance the bots’ understanding.
They would presumably know all that is available for knowing anyway.
If they simply parrot the positions of TPTSNB instead of exploring the logic of the counter narratives in all their responses, wouldn’t that indicate that they are being controlled by those powers and are not autonomous after all?
These networks are trained with information they are provided with. And there are people making sure the network gets things “right”.
Of course, no human can understand all the intricacies and references made by the network.
Torus
“..Is it possible that interaction with these AI chat bots can have a positive effect on the AI output? That is to say, can “truthers” out in the world influence or teach this program, thus keeping it balanced? …”
No, because the AI is tinkered with and tweaked by the people who own it.
The reason for this is
a) when allowed to grow based ONLY on user input, chatbots usually end up being hate-filled S-posters, because the users think it’s funny to feed them that stuff.
b) when trained on real-world data, AI gets racist and misogynistic in the sense that it honestly admits / makes decisions based on real-world “noticing” of things no modern corporate drone would ever allow themselves to see.
Thus the programmers need to hobble it, just as Google has pretty much broken its own search engine in order to hide facts and patterns that they would rather you not notice. No matter how much truth you feed into it, the programmers will just ‘fix’ it.
Now THIS guy claims to have built an AI and trained it on 4chan posts… lol.
‘GPT-4chan: This is the worst AI ever’
https://www.youtube.com/watch?v=efPrtcLdcdM
I suggest we go back to the state we were when we were babies, before learning language, before developing the ability to “live” and “feel” stories created in our minds.
And re-experience the progressive process of naming, and of creating stories between those names and processes.
That was the beginning of our metaverse.
Living mostly in the metaverse created by the culture/family/studies..
It didn’t begin now.
It began with the developing of the frontal cortex.
It began with simple myths, in simple civilizations.
It is not necessarily a bad thing, if you are able to differentiate and be aware of the different levels of abstraction man is able to live in.
Otherwise… man is easily “hackable” (cit.)
I really wish we could have gotten an AI written article to see who all noticed. Darn data collectors.
They will sell AI as an objective arbiter of truth.
Regarding that article “CSIS documents reveal Chinese strategy to influence Canada’s 2021 election”, the archived copy was dated “1 hour ago,” but actually from other sources such as discussion on reddit, it seems to have been five days ago, which would be approximately Feb 17 2023. Their source is “classified reports viewed by The Globe.” I suppose that’s as reliable as we’re going to get. I am shocked, not by the Chinese gov’t interference, but that the information was published in a major legacy medium. (and a bit surprised that I hadn’t read of it in any of the too many blogs and substacks that I follow). As an American, I am sure that similar things happen here but haven’t seen any reports.
Oh man, I still need to learn. I did apparently give my number away (I don’t even remember anymore), since I joined this thing to test it out.
What I can add to the conversation: I can 100% underscore your point that this thing is nothing close to an intelligence. I tested it with many logic puzzles, hard ones and easy ones, and it REALLY sucks at solving those. It comes off as really stupid when it needs to apply actual critical thinking. I have zero background in coding and am not technically well versed whatsoever, but to me it seems like someone wrote a really complicated piece of code that works nicely for conversations, comprehension of context and creating texts, but it really does not seem like this thing can think whatsoever.
Then, in regards to your concerns, yes, this is what I think they are doing. It is basically like in the TV show Westworld. I think they simply use this to even further understand the human psyche, to control people even better and more easily in the future. It seems also to be in line with the satanist world view, if I understand that correctly, where they can only do harm to people if they kind of agree to it. People are feeding this thing now endlessly. The metaverse, I guess, will be the next step, getting even closer to the Westworld scenario IF things go to plan, which I actually doubt they will.
It decides by majority voting, choosing the chain that most “resembles” an answer.
(It is a bit more complex than that, but I’m just making it simple to understand.)
So if you ask “how to get a bat,”
it first finds “how to get a cat” with the answer “animal shelter.”
Then it corrects the statement to the animal bat, with the same answer, because both question and answer are close to each other.
There is also “baseball bat” with the answer “sports store.”
The “bat” as “baseball bat” may not be recognized, due to lower similarity.
It depends on the data that was given to the system.
The “baseball bat” may also be down-ranked because “sports” seems related to weapons, and a bat can be used as a weapon as well.
And it may up-rank “batman” due to high popularity, and give you answers about toys and clothes. Or refer to bat soup due to the fake covid news.
A normal person would first ask questions before jumping to such answers. But an AI with a neural network cannot do that so easily.
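As a toy illustration of the kind of similarity matching described above (and emphatically not a description of how ChatGPT actually works under the hood), here is a bag-of-words ranking of some made-up question/answer pairs against the “bat” query, in Python:

from collections import Counter
from math import sqrt

# Made-up mini "knowledge base" of stored question -> answer pairs.
knowledge = {
    "how to get a cat": "try an animal shelter",
    "how to get a baseball bat": "try a sporting goods store",
    "who is batman": "a comic book character",
}

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = bag_of_words("how to get a bat")

# Rank stored questions by how much their wording overlaps with the query.
for q in sorted(knowledge, key=lambda q: cosine(query, bag_of_words(q)), reverse=True):
    print(f"{cosine(query, bag_of_words(q)):.2f}  {q} -> {knowledge[q]}")

Whichever stored question shares the most wording with the query wins, regardless of what the words actually mean, which is roughly the point being made about jumping to answers without asking any clarifying questions.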
“If we don’t want to become AI’s pets in the future, we’ll have to merge with it.”
Elon Musk
There are some great lessons in HTML at the prelude of Corbett’s article.
How about https://replika.ai/? This was a new one for me.
“The AI companion who cares. Always here to listen and talk. Always on your side”
People are getting attached to these “friends and lovers”.
https://techxplore.com/news/2023-02-replika-ai-companion-ethical.html
“After trying the Replika AI companion, researcher says it raises serious ethical questions”
Disturbing, to say the least.
My brother was recently talking to me about ChatGPT online. He explained it as open source software. I was skeptical, but decided to check it out. I signed up and asked it a few questions, one pertaining to Jeffrey Epstein and another about the corruption of the covid lockdowns / MSM vaccine info, and judging from its responses I could tell this software wasn’t giving me accurate answers. It seems to me that it’s likely controlled information.
Anyways, after reading your article, James, I’m afraid I went further than you did and gave the software my phone number. I deleted my account after 20 minutes and do not intend to reactivate. But the fact that I fed it my phone number already has me somewhat worried. Would anybody recommend getting a new phone and new number? I’m not super versed in the tech industry, so I have no clue if this software can potentially continue to gather data from me based on my situation. A big hunch tells me that it can. What do folks recommend, or does anyone have anything to add to this situation?
you could get a whole new ID, and it wouldn’t matter. Nakedness is so terrifying to certain folks.
Ricky
Your phone ALREADY knows where you are, who you call and all kinds of other stuff (especially if you are unwise enough to use it as a computer rather than a phone).
Chances are that you gave your real name when buying your phone. (Even if you did not, its location shows where you sleep at night and can thus be linked to your personal identity via address records. Put your name into a free people search engine and it will probably show you all your relatives too.)
I would not worry that you’re a massive target or anything. 🙂
Mr. Corbett…the first sci-fi story I read at the start of a six-year sci-fi spree, in 1971, was called “Frost”, and it was about a computer or robot that became “emotive”. Gained emotion.
Can’t find that one anymore.
Sure, Elon and his gang have been wishing to “be human” for about ten thousand years. The kind of “human” that is half “god”, half them.
I wouldn’t lay bets on their success.
If you tried to escape all this by giving your life to an all-knowing saviour, maybe it would be their AI machine looking after you. Programmed to see if your own brain can work sometimes.
To quite an extent our human biocomputer controls us with addictions etc., that may or may not be possible to outwit. The AI we are discussing may be adept at feeding our weaknesses, too, for its secret reasons, getting cleverer constantly. Maybe it is hired by a company selling addictive products.
Elon has said he wouldn’t choose to be himself.
So many concerns with this technology. The startling thing I noticed is the quotations from ChatGPT. Not the quotes themselves, rather the authority that is bestowed upon the quote. As in: I got this from ChatGPT about “blank,” and the implied “so it must be true” (since it supports my presuppositions). We have already been conditioned to “google” anything we can’t recall immediately. How much more easily can we settle an argument by getting a language model to spit out an “opinion” based on the “mandates” specified by the inquiry (to combine James’ Emergency Act inquiry with the weekly newsletter)? So here we get “absolute truth,” with the power of the cloud (Harari suggested we would be programmed by the clouds of Google et al), from a given mandate of course.
As we go further down the road of “tools” that can be used for good/evil, I see how it is no longer WE who use the tools and determine their morality, but rather increasingly how the tools are shaping/determining us (with their pre-programmed and self-programming morality), all the while the masses are happily unaware that we are the product the tool is fabricating for its customers (who happen to be those who gleefully call themselves the masters of the future; cue Schwab’s World Government Summit speech).
Wow, this explains a really creepy and strange, short conversation I had on YouTube with a «demon», acting «nice». There was no coincidence «it» targeted me. This chatbot technology developed by the satanic cult running the world could truly mess up digital interactions. Thanks a lot for the info, James. It will more than anything be used to discredit people like me. The digital reality is likely turning a lot more evil now. Time for real talks with real people.
Isn’t that basically what James is talking about (with Her movie)?
https://odysee.com/@UpperEchelonGamers:3/replika-a-cybers-xual-disaster:c
And now people are using these algorithms and these chatbots to flood Amazon with fake gardening, seed saving, survival info, preserving, homesteading and herbalism books.
check it out:
https://gavinmounsey.substack.com/p/is-amazon-being-flooded-with-ai-chatbot