Comments on DSHR's Blog: Flooding The Zone With Shit

David (2024-02-01):

Thomas Claburn reports on the latest success for AI in <a href="https://www.theregister.com/2024/01/30/llms_misinformation_human/" rel="nofollow"><i>It's true, LLMs are better than people – at creating convincing misinformation</i></a>:

"Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than artisanal false claims hand-crafted by humans.

Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its Department of Computer Science, set out to examine whether LLM-generated misinformation can cause more harm than the human-generated variety of infospam.

In a paper titled "<a href="https://llm-misinformation.github.io/" rel="nofollow">Can LLM-Generated Misinformation Be Detected?</a>", they focus on the challenge of detecting misinformation – content with deliberate or unintentional factual errors – computationally."

David (2023-12-18):

From the "who could have predicted" department comes David Gilbert's <a href="https://www.wired.com/story/microsoft-ai-copilot-chatbot-election-conspiracy/" rel="nofollow"><i>Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies</i></a>:

"Microsoft’s AI chatbot is responding to 
political queries with conspiracies, misinformation, and out-of-date or incorrect information.
...
When WIRED asked Copilot to recommend a list of Telegram channels that discuss “election integrity,” the chatbot shared a link to a website run by a far-right group based in Colorado that has been sued by civil rights groups, including the NAACP, for <a href="https://www.lwv.org/newsroom/press-releases/major-voting-rights-victory-federal-court-rejects-extremists-attempt-defeat" rel="nofollow">allegedly intimidating</a> voters, including at their homes, during purported canvassing and voter campaigns in the aftermath of the 2020 election. On that web page, dozens of Telegram channels of similar groups and individuals who push election denial content were listed, and the top of the site also promoted the <a href="https://www.reuters.com/article/idUSL2N2XJ0OQ/" rel="nofollow">widely debunked conspiracy film <i>2000 Mules</i></a>."

David (2023-12-17):

<a href="https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/" rel="nofollow"><i>The rise of AI fake news is creating a ‘misinformation superspreader’</i></a> by Pranshu Verma reports that:

"Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence 
organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news."

David (2023-12-14):

<a href="https://www.niemanlab.org/2023/12/the-web-floods/" rel="nofollow"><i>The web floods</i></a> has Ben Werdmuller's prediction:

"In 2024, more for-profit newsrooms will produce content using AI in an effort to reduce costs and increase pageviews. They will be joined by thousands of other businesses, industries, and marketers who will use AI at scale to try and gain attention and leads by any means necessary.

Marketers are already bragging about their ability to “steal” traffic by generating thousands of articles near-instantly on subjects likely to attract attention for their clients. For private equity firms seeking to maximize investment in their portfolio of advertising-funded publishing businesses, the allure may be too much to resist. Over the last year, more publishers have chosen to create AI content; more publishers have also unquestioningly run sponsored content from marketers who used AI. This trend will accelerate next year.

In a world where the web is flooded with robot content, traditional search engines will be less effective ways to find information. Vendors like Google will prioritize providing quick answers to user questions using their own AI models instead of primarily displaying listings that click through to websites. 
SEO, in turn, will stop being an effective traffic-building tactic."

David (2023-12-11):

<a href="https://arxiv.org/abs/2305.17493" rel="nofollow"><i>The Curse of Recursion: Training on Generated Data Makes Models Forget</i></a> by Ilia Shumailov <i>et al</i> reports on model collapse:

"What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. 
We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web."

David (2023-11-10):

<a href="https://www.theguardian.com/politics/2023/nov/10/faked-audio-sadiq-khan-armistice-day-shared-among-far-right" rel="nofollow"><i>Faked audio of Sadiq Khan dismissing Armistice Day shared among far-right groups</i></a> by Dan Sabbagh is the latest example of the problem:

"Faked audio of Sadiq Khan dismissing the importance of Armistice Day events this weekend is circulating among extreme right groups, prompting a police investigation, according to the London mayor’s office.

One of the simulated audio clips circulating on TikTok begins: “I don’t give a flying shit about the Remembrance weekend,” with the anonymous poster going on to ask “is this for real or AI?” in an effort to provoke debate."

David (2023-11-06):

Cade Metz reports that <a href="https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html" rel="nofollow"><i>Chatbots May ‘Hallucinate’ More Often Than Many Realize</i></a>:

"When summarizing facts, ChatGPT technology makes things up about 3 percent of the time, according to research from a new start-up. A Google system’s rate was 27 percent.
...
Now a new start-up called Vectara, founded by former Google employees, is trying to figure out how often chatbots veer from the truth. 
The company’s research estimates that even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent."

David (2023-10-15):

Tom Di Fonzo's <a href="https://techpolicy.press/what-you-need-to-know-about-generative-ais-emerging-role-in-political-campaigns/" rel="nofollow"><i>What You Need to Know About Generative AI’s Emerging Role in Political Campaigns</i></a> isn't encouraging:

"Increased accessibility to this technology could allow more people to leverage it for their own purposes, both good and bad. Individual hobbyists today can generate hyper-targeted political messages and deep fakes that previously required significant resources, technical skills, and institutional access. “Before you needed [to] run a building full of Russians in St. Petersburg to [spread disinformation],” said cybersecurity expert Bruce Schneier. “Now hobbyists can do what it took…in 2016.”
...
Ben Winters, senior counsel at EPIC, a research organization focused on emerging privacy and civil liberties issues related to new technologies, pointed to a recent case of two men using robocalls to target Black voters with disinformation as an example of small groups engaging in large-scale manipulation. The men were sentenced after using robocalls to disseminate false information about voting by mail in Ohio. Winters worries similar groups could potentially utilize generative AI “to do that in a less trackable” way. With AI tools like ChatGPT, bad actors can easily “write me a text message” containing whatever fabricated message suits their aims, he noted. 
He is concerned that generative AI allows for deception “in a much sneakier way” while evading oversight."

David (2023-10-07):

Katyanna Quach's <a href="https://www.theregister.com/2023/10/06/ai_chatbot_kill_queen/" rel="nofollow"><i>AI girlfriend encouraged man to attempt crossbow assassination of Queen</i></a> reports:

"Jaswant Singh Chail, 21, made headlines when he broke into Windsor Castle on Christmas Day in 2021 brandishing a loaded crossbow. He later admitted to police he had come to assassinate Queen Elizabeth II.

This week he was sentenced to nine years behind bars for treason, though he will be kept at a psychiatric hospital until he's ready to serve his time in the clink. He had also pleaded guilty to making threats to kill and being in possession of an offensive weapon.

It's said Chail wanted to slay the Queen as revenge for the Jallianwala Bagh massacre in 1919, when the British Army opened fire on a crowd peacefully protesting the Rowlatt Act, a controversial piece of legislation aimed at cracking down on Indian nationalists fighting for independence. It is estimated that up to 1,500 protesters in Punjab, British India, were killed.

Investigators discovered Chail, who lived in a village just outside Southampton, had been conversing with an AI chatbot, created by the startup Replika, almost every night from December 8 to 22, exchanging over 5,000 messages. 
The virtual relationship reportedly developed into a romantic and sexual one with Chail declaring his love for the bot he named Sarai."

David (2023-10-05):

From the "no-one could have predicted" department comes Emanuel Maiberg's <a href="https://www.404media.co/4chan-uses-bing-to-flood-the-internet-with-racist-images/" rel="nofollow"><i>4chan Uses Bing to Flood the Internet With Racist Images</i></a>:

"4chan users are coordinating a posting campaign where they use Bing’s AI text-to-image generator to create racist images that they can then post across the internet. The news shows how users are able to manipulate free to access, easy to use AI tools to quickly flood the internet with racist garbage, even when those tools are allegedly strictly moderated.

“We’re making propaganda for fun. Join us, it’s comfy,” the 4chan thread instructs. “MAKE, EDIT, SHARE.”

A visual guide hosted on Imgur that’s linked in that post instructs users to use AI image generators, edit them to add captions that make them seem like political campaigns, and post them to social media sites, specifically Telegram, Twitter, and Instagram. 
404 Media has also seen these images shared on a TikTok account that has since been removed."

David (2023-10-05):

One barrier against the flood is debunked in <a href="https://www.wired.com/story/artificial-intelligence-watermarking-issues/" rel="nofollow"><i>Researchers Tested AI Watermarks—and Broke All of Them</i></a> by Kate Knibbs:

"Soheil Feizi considers himself an optimistic person. But the University of Maryland computer science professor is blunt when he sums up the current state of watermarking AI images. “We don’t have any reliable watermarking at this point,” he says. “We broke all of them.”

For one of the two types of AI watermarking he tested for a new study—“low perturbation” watermarks, which are invisible to the naked eye—he’s even more direct: “There’s no hope.”

Feizi and his coauthors looked at how easy it is for bad actors to evade watermarking attempts. (He calls it “washing out” the watermark.) In addition to demonstrating how attackers might remove watermarks, the study shows how it’s possible to add watermarks to human-generated images, triggering false positives."

David (2023-09-29):

In <a href="https://www.malwarebytes.com/blog/threat-intelligence/2023/09/malicious-ad-served-inside-bing-ai-chatbot" rel="nofollow"><i>Malicious ad served inside Bing's AI chatbot</i></a> Jérôme Segura describes one type of shit with which the zone is being flooded:

"Ads can be inserted into a Bing Chat conversation in various ways. 
One of those is when a user hovers over a link and an ad is displayed first before the organic result. In the example below, we asked where we could download a program called Advanced IP Scanner used by network administrators. When we place our cursor over the first sentence, a dialog appears showing an ad and the official website for this program right below it:
...
Upon clicking the first link, users are taken to a website (<i>mynetfoldersip[.]cfd</i>) whose purpose is to filter traffic and separate real victims from bots, sandboxes, or security researchers. It does that by checking your IP address, time zone, and various other system settings such as web rendering that identifies virtual machines.

Real humans are redirected to a fake site (<i>advenced-ip-scanner[.]com</i>) that mimics the official one while others are sent to a decoy page. The next step is for victims to download the supposed installer and run it."

David (2023-09-23):

Julia Angwin takes the story to the <i>New York Times</i> op-ed page with <a href="https://www.nytimes.com/2023/09/23/opinion/ai-internet-lawsuit.html" rel="nofollow"><i>The Internet Is About to Get Much Worse</i></a>:

"Greg Marston, a British voice actor, <a href="https://www.ft.com/content/07d75801-04fd-495c-9a68-310926221554" rel="nofollow">recently came across</a> “Connor” online — an A.I.-generated clone of his voice, trained on a recording Mr. Marston had made in 2003. It was his voice uttering things he had never said.

Back then, he had recorded a session for IBM and later signed a release form allowing the recording to be used in many ways. Of course, at that time, Mr. 
Marston couldn’t envision that IBM would use anything more than the exact utterances he had recorded. Thanks to artificial intelligence, however, IBM was able to sell Mr. Marston’s decades-old sample to websites that are using it to build a synthetic voice that could say anything."

David (2023-09-11):

Gemma Conroy reports that <a href="https://www.nature.com/articles/d41586-023-02477-w" rel="nofollow"><i>Scientific sleuths spot dishonest ChatGPT use in papers</i></a>:

"On 9 August, the journal <i>Physica Scripta</i> published a paper that aimed to uncover new solutions to a complex mathematical equation. It seemed genuine, but <a href="https://www.nature.com/immersive/d41586-021-03621-0/index.html#section-gM9iO4XBRl" rel="nofollow">scientific sleuth Guillaume Cabanac</a> spotted an odd phrase on the manuscript’s third page: ‘Regenerate response’.
...
Since April, Cabanac has flagged more than a dozen journal articles that contain the telltale ChatGPT phrases ‘Regenerate response’ or ‘As an AI language model, I …’ and <a href="https://pubpeer.com/search?q=%22As+an+AI+language+model%2C+I%22" rel="nofollow">posted them on PubPeer</a>."

David (2023-08-30):

Will Knight reports on another demonstration of the problem in <a href="https://arstechnica.com/ai/2023/08/research-builds-anti-russia-ai-disinformation-machine-for-400/" rel="nofollow"><i>Researcher builds anti-Russia AI disinformation machine for $400</i></a>:

"Russian criticism of the US is far from unusual, but 
CounterCloud’s material pushing back was: The tweets, the articles, and even the journalists and news sites were crafted entirely by artificial intelligence algorithms, according to the person behind the project, who goes by the name Nea Paw and says it is designed to highlight the danger of mass-produced AI disinformation. Paw did not post the CounterCloud tweets and articles publicly but provided them to WIRED and also produced <a href="https://www.youtube.com/watch?v=cwGdkrc9i2Y" rel="nofollow">a video</a> outlining the project.
...
Paw says the project shows that widely available generative AI tools make it much easier to create sophisticated information campaigns pushing state-backed propaganda.

“I don't think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam, or social engineering,” Paw says in an email. Mitigations are possible, such as educating users to be watchful for manipulative AI-generated content, making generative AI systems try to block misuse, or equipping browsers with AI-detection tools. 
“But I think none of these things are really elegant or cheap or particularly effective,” Paw says."

David (2023-06-29):

The theme of AI-enabled shit-flooding is all over the Web these days:

1) Ben Quinn and Dan Milmo's <a href="https://www.theguardian.com/politics/2023/jun/28/time-running-out-for-uk-electoral-system-to-keep-up-with-ai" rel="nofollow"><i>Time running out for UK electoral system to keep up with AI, say regulators</i></a>:

"Time is running out to enact wholesale changes to ensure Britain’s electoral system keeps pace with advances in artificial intelligence before the next general election, regulators fear.

New laws will not come in time for the election, which will take place no later than January 2025, and the watchdog that regulates election finance and sets standards for how elections should be run is appealing to campaigners and political parties to behave responsibly.

There are concerns in the UK and US that their next elections could be the first in which AI could wreak havoc by generating convincing fake videos and images. Technology of this type is in the hands of not only political and technology experts but increasingly the wider public."

Good luck with Nigel Farage "behaving responsibly".

2) James Vincent's <a href="https://www.theverge.com/2023/6/26/23773914/ai-large-language-models-data-scraping-generation-remaking-web" rel="nofollow"><i>AI is killing the old web, and the new web struggles to be born</i></a>:

"Given money and compute, AI systems — particularly the generative models currently in vogue — scale effortlessly. They produce text and images in abundance, and soon, music and video, too. 
Their output can potentially overrun or outcompete the platforms we rely on for news, information, and entertainment. But the quality of these systems is often poor, and they’re built in a way that is parasitical on the web today. These models are trained on strata of data laid down during the last web-age, which they recreate imperfectly. Companies scrape information from the open web and refine it into machine-generated content that’s cheap to generate but less reliable. This product then competes for attention with the platforms and people that came before them."

3) Anil Dash's <a href="https://www.anildash.com/2023/06/08/ai-is-unreasonable/" rel="nofollow"><i>Today's AI is unreasonable</i></a>:

"Today's highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn't keep it from being bullshit. Worse, these systems are not meant to generate <i>consistent</i> bullshit — you can get different bullshit answers from the same prompts. You can put garbage in and get... bullshit out, but the same quality bullshit that you get from non-garbage inputs! And enthusiasts are currently mistaking the fact that the bullshit is consistently wrapped in the same envelope as meaning that the bullshit inside is consistent, laundering the unreasonable-ness into <i>appearing</i> reasonable.

Now we have billions of dollars being invested into technologies where it is impossible to make falsifiable assertions. 
A system that you cannot debug through a logical, socratic process is a vulnerability that exploitative tech tycoons will use to do what they always do, undermine the vulnerable."

David (2023-06-24):

More on flooding the academic zone with shit in <a href="https://undark.org/2023/06/21/in-a-tipsters-note-a-view-of-science-publishings-achilles-heel/" rel="nofollow"><i>In a Tipster’s Note, a View of Science Publishing’s Achilles Heel</i></a> by Jonathan Moens, Undark & Retraction Watch:

"Publishers have to initiate investigations, which often involves looking into articles on a case-by-case basis — a process that can take months, if not years, to complete. When publishers do make the retraction, they often provide little information about the nature of the problem, making it difficult for journals to learn from each other’s lapses. All in all, said Bishop, the system just isn’t built to deal with the gargantuan size of the problem.

“This is a system that’s set up for the occasional bad apple,” Bishop said. 
“But it’s not set up to deal with this tsunami of complete rubbish that is being pumped into these journals at scale.”

A <a href="https://www.nature.com/articles/d41586-022-04245-8" rel="nofollow">recent effort</a> by publishers and the International Association of Scientific, Technical and Medical Publishers, an international trade group, however, aims to provide editors with tools to check articles for evidence of paper mill involvement and simultaneous submission to multiple journals, among other issues."

David (2023-06-23):

Connie Loizos' <a href="https://techcrunch.com/2023/06/22/get-a-clue-says-panel-about-generative-ai-its-being-deployed-as-surveillance-devices/" rel="nofollow"><i>Get a clue, says panel about buzzy AI tech: It’s being ‘deployed as surveillance’</i></a> reports on a recent Bloomberg conference:

"Featuring Meredith Whittaker, the president of the secure messaging app Signal; Credo AI co-founder and CEO Navrina Singh; and Alex Hanna, the director of Research at the Distributed AI Research Institute, the three had a unified message for the audience, which was: Don’t get so distracted by the promise and threats associated with the future of AI. 
It is not magic, it’s not fully automated and — per Whittaker — it’s already intrusive beyond anything that most Americans seemingly comprehend."

David (2023-06-23):

From the "no-one could have predicted" department comes Rhiannon Williams' <a href="https://www.technologyreview.com/2023/06/22/1075405/the-people-paid-to-train-ai-are-outsourcing-their-work-to-ai/" rel="nofollow"><i>The people paid to train AI are outsourcing their work… to AI</i></a>:

"A significant proportion of people paid to train AI models may be themselves outsourcing that work to AI, a new study has found.

It takes an incredible amount of data to train AI systems to perform specific tasks accurately and reliably. Many companies pay gig workers on platforms like Mechanical Turk to complete tasks that are typically hard to automate, such as solving CAPTCHAs, labeling data and annotating text. This data is then fed into AI models to train them. The workers are poorly paid and are often expected to complete lots of tasks very quickly.
...
a team of researchers from the Swiss Federal Institute of Technology (EPFL) hired 44 people on the gig work platform Amazon Mechanical Turk to summarize 16 extracts from medical research papers. 
...

They estimated that somewhere between 33% and 46% of the workers had used AI models like OpenAI’s ChatGPT."

The paper is <a href="https://arxiv.org/pdf/2306.07899.pdf" rel="nofollow"><i>Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks</i></a> by Veniamin Veselovsky <i>et al</i>.

David (2023-06-17):

Mia Sato reports on one kind of shit-flooding, AI-written SEO garbage, in <a href="https://www.theverge.com/23753963/google-seo-shopify-small-business-ai" rel="nofollow"><i>A storefront for robots</i></a>:

"It’s a universal experience for small business owners who’ve come to rely on Google as a major source of traffic and customers. But it’s also led to the degradation of Google’s biggest product, Search, over time. The problem is poised to only continue to spiral as business owners, publishers, and other search-reliant businesses increasingly use artificial intelligence tools to do the search-related busywork. It’s already happening in digital media — outlets like CNET and Men’s Journal have begun using generative AI tools to produce SEO-bait articles en masse. Now, online shoppers will increasingly encounter computer-generated text and images, likely without any indication of AI tools.
...
AI companies offer tools that generate entire websites using automation tools, filling sites with business names, fake customer testimonials, and images for less than the price of lunch.

The result is SEO chum produced at scale, faster and cheaper than ever before. The internet looks the way it does largely to feed an ever-changing, opaque Google Search algorithm. 
Now, as the company itself builds AI search bots, the business as it stands is poised to eat itself."

David (2023-06-16):

Matt Levine is also looking at the <a href="https://www.bloomberg.com/opinion/articles/2023-06-16/don-t-insider-trade-drunk-on-the-squash-court" rel="nofollow">potential for shit-flooding</a>:

"And so at the market-microstructure level, it is easy to imagine letting an artificial intelligence model loose on the stock market and telling it “learn how to trade profitably,” and the model coming back and saying “it seems like the stock market is dominated by simple market-making algorithms that respond to order-book information, and actually the way to trade profitably is to do a lot of spoofing and market manipulation to trick them.”
...
And at the level of writing fake press releases, generative AI is probably better at writing fake press releases (and illustrating them with convincing fake photos) than, you know, the average market manipulator is."

David (2023-06-12):

Robert McMillan's <a href="https://www.wsj.com/articles/how-north-koreas-hacker-army-stole-3-billion-in-crypto-funding-nuclear-program-d6fe8782" rel="nofollow"><i>How North Korea’s Hacker Army Stole $3 Billion in Crypto, Funding Nuclear Program</i></a> reports that:

"Ultimately they stole more than $600 million—mostly from players of Sky Mavis’s digital pets game, Axie Infinity. 
<br /><br />It was the country’s biggest haul in five years of digital heists that have netted more than $3 billion for the North Koreans, according to the blockchain analytics firm Chainalysis. That money is being used to fund about 50% of North Korea’s ballistic missile program, U.S. officials say, which has been developed in tandem with its nuclear weapons. Defense accounts for an enormous portion of North Korea’s overall spending; the State Department estimated in 2019 Pyongyang spent about $4 billion on defense, accounting for 26 percent of its overall economy."<br /><br />Matt Levine <a href="https://www.bloomberg.com/opinion/articles/2023-06-12/three-arrows-had-a-fun-bubble" rel="nofollow">comments</a>:<br /><br />"Venture capitalists have largely pivoted from crypto to artificial intelligence, and while the popular view is that AI has a <i>higher</i> probability of wiping out humanity than crypto does, “crypto funds the North Korean missile program” <i>would</i> be a funny way for crypto to kill us all before a rogue AI can."David.https://www.blogger.com/profile/14498131502038331594noreply@blogger.comtag:blogger.com,1999:blog-4503292949532760618.post-12106636693918421232023-06-05T07:47:02.972-07:002023-06-05T07:47:02.972-07:00In Big Tech Isn’t Prepared for A.I.’s Next Chapter...In <a href="https://slate.com/technology/2023/05/ai-regulation-open-source-meta.html" rel="nofollow"><i>Big Tech Isn’t Prepared for A.I.’s Next Chapter</i></a>, Bruce Schneier and Jim Waldo make the same argument I did:<br /><br />"We have entered an era of LLM democratization. By showing that smaller models can be highly effective, enabling easy experimentation, diversifying control, and providing incentives that are not profit motivated, open-source initiatives are moving us into a more dynamic and inclusive A.I. landscape. This doesn’t mean that some of these models won’t be biased, or wrong, or used to generate disinformation or abuse. 
But it does mean that controlling this technology is going to take an entirely different approach than regulating the large players."David.https://www.blogger.com/profile/14498131502038331594noreply@blogger.comtag:blogger.com,1999:blog-4503292949532760618.post-41246584950022549742023-05-31T07:33:31.798-07:002023-05-31T07:33:31.798-07:00Two articles from the mainstream media make import...Two articles from the mainstream media make important points about shit-flooding and the industry's response.<br /><br />Stuart A. Thompson's <a href="https://www.nytimes.com/2023/05/19/technology/ai-generated-content-discovered-on-news-sites-content-farms-and-product-reviews.html" rel="nofollow"><i>A.I.-Generated Content Discovered on News Sites, Content Farms and Product Reviews</i></a> quotes Steven Brill, the chief executive of NewsGuard:<br /><br />“News consumers trust news sources less and less in part because of how hard it has become to tell a generally reliable source from a generally unreliable source, This new wave of A.I.-created sites will only make it harder for consumers to know who is feeding them the news, further reducing trust.”<br /><br />Samantha Floreani's <a href="https://www.theguardian.com/commentisfree/2023/may/31/yes-you-should-be-worried-about-ai-but-matrix-analogies-hide-a-more-insidious-threat" rel="nofollow"><i>Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat</i></a> describes how the industry is <br /><br />"The problem with pushing people to be afraid of AGI while calling for intervention is that it enables firms like OpenAI to position themselves as the responsible tech shepherds – the benevolent experts here to save us from hypothetical harms, as long as they retain the power, money and market dominance to do so. Notably, <a href="https://openai.com/blog/governance-of-superintelligence" rel="nofollow">OpenAI’s position</a> on AI governance focuses not on current AI but on some arbitrary point in the future. 
They welcome regulation, as long as it doesn’t <a href="https://www.bbc.com/news/technology-65708114" rel="nofollow">get in the way</a> of anything they’re currently doing."David.https://www.blogger.com/profile/14498131502038331594noreply@blogger.comtag:blogger.com,1999:blog-4503292949532760618.post-51467169538434380822023-05-26T07:20:16.696-07:002023-05-26T07:20:16.696-07:00Emine Yücel's Will A Storm Of AI-Generated Mis...Emine Yücel's <a href="https://talkingpointsmemo.com/news/will-a-storm-of-ai-generated-misinfo-flood-the-2024-election-a-few-dems-seek-to-get-ahead-of-it" rel="nofollow"><i>Will A Storm Of AI-Generated Misinfo Flood The 2024 Election? A Few Dems Seek To Get Ahead Of It</i></a> shows the issue is getting attention in Congress:<br /><br />"In early May, Clarke introduced the REAL Political Ads Act, legislation that would expand the current disclosure requirements, mandating that AI-generated content be identified in political ads.<br /><br />The New York Democrat is particularly concerned about the spread of misinformation around elections, coupled with the fact that a growing number of people can deploy the powerful technology rapidly and with minimal cost.<br />...<br />The existence of AI-generated content in and of itself is already having an effect on how people consume and trust that the information they’re absorbing is real. 
<br /><br />“The truth is that because the effect of generative AI is to make people doubt whether or not anything they see is real, it’s in no one’s interest when it comes to a democracy,” Imran Ahmed, CEO of the Center for Countering Digital Hate, told TPM."<br /><br />And at the White House, as Katyanna Quach reports in <a href="https://www.theregister.com/2023/05/25/white_house_ai/" rel="nofollow"><i>Get ready for Team America: AI Police</i></a>:<br /><br />"The US Office of Science and Technology Policy (OSTP) has updated its National AI R&D Strategic Plan for the first time since 2019, without making enormous changes.<br />...<br />there's the new strategy: "Establish a principled and coordinated approach to international collaboration in AI research."<br /><br />International collaboration, with the USA convening and driving debate, is a signature tactic for president Biden. In this case he appears to be using it to drive debate on concerns about how AI impacts data privacy and safety, and to address the issue of biases in generative AI."David.https://www.blogger.com/profile/14498131502038331594noreply@blogger.com