
Using Bots to Listen to Music: How an $8M AI Fraud Exposed Streaming’s Dark Side
The digital age, for all its marvels, continues to unveil new frontiers for both innovation and exploitation. A recent high-profile case has brought into sharp focus the alarming trend of using bots to listen to music, a deceitful practice designed to defraud streaming platforms and legitimate artists. Michael Smith, a North Carolina resident, recently pleaded guilty to orchestrating a sophisticated scheme involving hundreds of thousands of AI-generated songs and automated programs that fraudulently played these tracks billions of times, ultimately siphoning over $8 million in royalties from major streaming services like Amazon Music, Apple Music, Spotify, and YouTube Music. This elaborate fraud underscores a significant threat to the integrity of the music industry, raising critical questions about how artificial intelligence and automated systems are being weaponized for illicit gain.
What is the scheme of using bots to listen to music?
The scheme of using bots to listen to music, often referred to as “artificial streaming” or “stream manipulation,” involves employing automated software programs (bots) to simulate genuine human listening activity on music streaming platforms. These bots are designed to play songs repeatedly, often from various accounts and IP addresses, to inflate play counts artificially. The primary goal of such a scheme is to generate fraudulent royalties, as streaming platforms typically pay artists and rights holders based on the number of plays their songs receive. This creates a fabricated demand for music that has no real audience, diverting funds from the legitimate pool of royalties intended for artists whose music is genuinely consumed by listeners.
Beyond simply inflating play counts, sophisticated bot operations can mimic complex user behavior to evade detection. This might include varying listening times, skipping tracks, creating playlists, and even interacting with other features on the platform, all to make the bot activity appear more organic. The perpetrators often upload vast quantities of music, sometimes generated by artificial intelligence, to maximize the potential for fraudulent earnings. By distributing these tracks across numerous fake accounts and employing large networks of bots, they can accumulate millions, if not billions, of fabricated streams, translating into significant illicit profits. This practice not only undermines the financial stability of real artists but also distorts consumption data, making it harder for platforms and industry professionals to identify genuine trends and popular music.
How did Michael Smith use bots to listen to music and manipulate royalties?
Michael Smith’s method for using bots to listen to music and manipulate royalties was particularly brazen and extensive. According to the U.S. Attorney’s Office for the Southern District of New York, Smith admitted to generating “hundreds of thousands of songs with AI” and then deploying “automated programs to fraudulently play his songs billions of times.” His operation was designed to create a self-sustaining loop where his AI-created music was “listened to” by his bots, triggering royalty payments from streaming services. He effectively manufactured both the product (AI music) and the consumer (bots) to extract funds.

Smith’s sophisticated setup allowed him to mimic genuine listener activity, making it difficult for platforms to immediately distinguish between real plays and automated ones. He managed to obtain over $8 million in royalties through this fraudulent activity. A prior investigation by Rolling Stone shed further light on the scale of his operation, revealing that Smith controlled 1,040 streaming accounts. Each of these accounts was reportedly playing approximately 636 songs per day. This meticulous, high-volume approach enabled him to generate an estimated $3,300 daily, or over $1.2 million annually, before his ultimate conviction. While some of the songs involved in his earlier schemes belonged to real musicians, the majority were pieces created using artificial intelligence, demonstrating his consistent reliance on AI to scale his fraudulent enterprise. His arrest in North Carolina in September 2024 marked the culmination of this elaborate scheme, leading to his guilty plea and a potential maximum sentence of five years in prison.
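The scale reported by Rolling Stone can be sanity-checked with a little arithmetic. Note that the per-stream payout below is merely inferred from the reported figures for illustration, not an official platform rate:

```python
# Figures from the Rolling Stone investigation cited above.
accounts = 1_040          # streaming accounts Smith controlled
plays_per_account = 636   # songs each account played per day
daily_revenue = 3_300     # estimated dollars earned per day

daily_streams = accounts * plays_per_account
print(f"Streams per day: {daily_streams:,}")            # 661,440

# Implied payout per stream (inferred, not an official rate)
per_stream = daily_revenue / daily_streams
print(f"Implied payout per stream: ${per_stream:.4f}")  # ~$0.0050

annual_revenue = daily_revenue * 365
print(f"Annual revenue: ${annual_revenue:,}")           # $1,204,500
```

The implied rate of roughly half a cent per stream is in the ballpark of commonly reported streaming payouts, and the annual figure matches the article’s “over $1.2 million.”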
The broader impact of artificial streams on the music industry
The insidious practice of using bots to listen to music has far-reaching and detrimental consequences for the entire music industry. Firstly, and most directly, it constitutes a massive financial drain. Streaming platforms operate on a “pro-rata” payment model, where a large pool of subscription revenue is divided among artists based on their share of total streams. When bots generate artificial streams, they siphon money from this common pool, directly reducing the revenue available for legitimate musicians and songwriters whose music is genuinely enjoyed by real consumers. This means that every dollar fraudulently earned by a bot farm is a dollar less for a deserving artist struggling to make a living.
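The dilution effect of the pro-rata model can be sketched with toy numbers (all figures illustrative, not actual platform data): because the royalty pool is fixed, every fraudulent stream shrinks each legitimate artist’s share.

```python
def pro_rata_payouts(pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split a fixed royalty pool by each artist's share of total streams."""
    total = sum(streams.values())
    return {artist: pool * n / total for artist, n in streams.items()}

# Illustrative: a $1,000,000 monthly pool and two legitimate artists.
honest = {"artist_a": 600_000, "artist_b": 400_000}
print(pro_rata_payouts(1_000_000, honest))
# artist_a: $600,000, artist_b: $400,000

# A bot farm injects 1,000,000 fake streams for its own uploads.
with_bots = {**honest, "bot_farm": 1_000_000}
print(pro_rata_payouts(1_000_000, with_bots))
# artist_a: $300,000, artist_b: $200,000 -- half the pool siphoned off
```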
Secondly, artificial streams distort market data and devalue legitimate artistry. Industry professionals, labels, and A&R executives rely on streaming numbers to identify emerging talent, track popularity, and make investment decisions. When these numbers are inflated by bots, the true popularity of artists becomes obscured. This can lead to misallocation of resources, misguided marketing strategies, and ultimately, a less equitable and meritocratic industry. Furthermore, it erodes trust in streaming charts and metrics, making it harder for genuine success stories to emerge and for fans to discover authentic music. The perception that success can be bought rather than earned through talent and hard work can be deeply discouraging for artists and damaging to the creative spirit of the industry. It also creates a “pay-to-play” environment where those with the means to deploy bots gain an unfair advantage, further marginalizing independent artists who lack such resources.
Finally, the proliferation of artificial streams undermines the integrity and credibility of streaming platforms themselves. If users begin to suspect that play counts are routinely manipulated, their trust in the platforms as fair arbiters of musical success will diminish. This can lead to user churn, decreased engagement, and a reluctance to subscribe, ultimately harming the platforms’ business models and their ability to sustain the ecosystem that supports artists. The battle against artificial streams is not just about preventing fraud; it’s about preserving the fundamental fairness and authenticity that should underpin the relationship between artists, platforms, and listeners.
The rise of AI in music creation and its challenges
The advent of artificial intelligence has revolutionized music creation, offering unprecedented tools for artists to compose, produce, and experiment. AI can generate melodies, harmonies, rhythms, and even entire songs, enabling new forms of creativity and accessibility. However, this powerful technology also presents significant challenges, particularly when it comes to distinguishing between human-made and AI-generated content, and preventing its misuse for fraudulent purposes like using bots to listen to music. The Michael Smith case exemplifies this duality, showcasing how AI can be leveraged not just for creative output, but also as a component in large-scale fraud.
One of the most pressing challenges is the increasing difficulty for listeners to differentiate between “real” music and AI-generated tracks. Data from the French streaming platform Deezer, as cited by The Guardian, reveals that a staggering 97% of users cannot tell the difference between human-created and AI-generated music. This blurring of lines creates fertile ground for fraudsters, as AI-generated songs can be produced en masse without the traditional costs and time associated with human composition. Platforms like Suno, an AI music-generation service, reportedly produce 7 million songs per day, illustrating the immense volume of content that can now be created algorithmically. This deluge of AI-generated music, some of which may be intended for legitimate use, also provides an endless supply of material for those looking to populate fake artist profiles for stream manipulation.
Beyond entire songs, AI poses threats through “deepfakes” of artists’ voices. These sophisticated AI models can mimic the vocal styles of existing artists, creating new songs or even entire albums that sound as if they were performed by the original artist, but without their consent or involvement. This raises complex issues of copyright, artistic integrity, and intellectual property. Artists face the daunting prospect of their voices and styles being exploited without compensation or control, creating a legal and ethical minefield that the industry is still struggling to navigate. The challenge for platforms and regulators is to develop robust frameworks that encourage legitimate AI innovation while effectively combating its misuse for fraud and unauthorized replication.
How platforms are combating fraudulent streaming and AI abuse
Recognizing the existential threat posed by fraudulent streaming and the misuse of AI, major music platforms are actively implementing new policies and investing in advanced technologies to combat these issues. Their efforts are multi-pronged, focusing on detection, prevention, and enforcement to safeguard the integrity of their ecosystems and protect legitimate artists from schemes like using bots to listen to music.
Spotify, one of the largest streaming services, has been particularly vocal about its commitment to fighting artificial streams. The company has rolled out new policies that explicitly prohibit “impersonation” and require “common AI disclosures in music credits.” This means that content creators are increasingly expected to be transparent about their use of AI in music production. Furthermore, Spotify has publicly stated its significant investment “in detecting, preventing, and removing the impact on royalties from artificial streams.” This involves employing sophisticated algorithms and machine learning models trained to identify patterns indicative of bot activity, such as unusual spikes in play counts, repetitive listening behaviors from single accounts, or plays originating from suspicious IP addresses. When artificial streams are detected, platforms typically remove the fraudulent plays, adjust royalty payouts, and in severe cases, remove the offending content and ban the associated accounts.
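The kinds of heuristics described above can be sketched as simple rules over per-account play logs. This is an illustrative toy, not any platform’s actual detection pipeline, and the thresholds are invented for the example; real systems learn them from data and combine many more signals:

```python
from collections import Counter

def flag_suspicious(plays: list[dict],
                    max_daily_plays: int = 500,
                    max_repeat_ratio: float = 0.8) -> set[str]:
    """Flag accounts whose daily volume or track repetition looks non-human.

    Each play is a dict like {"account": ..., "track": ...} covering one day.
    Thresholds are illustrative assumptions, not real platform values.
    """
    per_account = Counter(p["account"] for p in plays)
    repeats: dict[str, Counter] = {}
    for p in plays:
        repeats.setdefault(p["account"], Counter())[p["track"]] += 1

    flagged = set()
    for account, n in per_account.items():
        top_track_share = repeats[account].most_common(1)[0][1] / n
        if n > max_daily_plays or top_track_share > max_repeat_ratio:
            flagged.add(account)
    return flagged

# A bot looping one track all day stands out immediately.
log = ([{"account": "bot1", "track": "song_x"}] * 600
       + [{"account": "fan1", "track": f"song_{i}"} for i in range(30)])
print(flag_suspicious(log))  # {'bot1'}
```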
Other platforms are also stepping up their game. Deezer, for instance, has been at the forefront of researching the impact of AI on music consumption and has highlighted the difficulty users have in distinguishing AI-generated content. Such insights inform their efforts to develop better detection mechanisms. The industry as a whole is exploring collaborative solutions, sharing data and best practices to stay ahead of increasingly sophisticated fraudsters. This includes working with rights holders, anti-piracy organizations, and legal bodies to strengthen enforcement and ensure that perpetrators like Michael Smith face appropriate legal consequences. The goal is not just to react to fraud but to proactively build systems that deter it, ensuring that the vast majority of streams on their platforms represent genuine listener engagement and that royalties flow to the artists who truly earn them.
Ethical and legal implications of using bots for music streams
The practice of using bots to listen to music carries profound ethical and legal implications that challenge the foundational principles of the music industry. Ethically, it represents a blatant act of deception and unfair competition. It undermines the meritocracy that should govern artistic success, suggesting that popularity can be bought rather than earned through talent, hard work, and genuine connection with an audience. This fraudulent activity steals potential earnings from legitimate artists, many of whom already struggle to make a living from their craft. It also erodes consumer trust, as listeners might unknowingly be exposed to music artificially boosted by bots, leading to a sense of betrayal when they realize the popularity is manufactured. This manipulation of perceived value can ultimately diminish the cultural significance of music itself, reducing it to a commodity whose worth is determined by algorithms rather than artistic merit or human appreciation.
Legally, stream manipulation falls squarely under the umbrella of fraud. Perpetrators like Michael Smith are engaging in schemes to defraud streaming platforms and rights holders by misrepresenting listening data to illicitly obtain royalty payments. This can lead to charges of wire fraud, mail fraud, and potentially other offenses depending on the jurisdiction and the specifics of the scheme. Copyright infringement also becomes a relevant issue, especially when AI is used to generate music that mimics existing artists or when copyrighted material is used without authorization within these fraudulent operations. The legal landscape is still evolving, but courts are increasingly recognizing these acts as serious crimes with significant penalties, as evidenced by Smith’s potential five-year prison sentence.
Furthermore, the ethical dilemma extends to the developers of AI music generation tools. While many are created with benevolent intentions, the ease with which they can be misused for fraudulent purposes raises questions about developer responsibility. Should there be built-in safeguards to prevent misuse? How can the industry balance innovation with the need to prevent exploitation? These questions are at the forefront of discussions among technologists, legal experts, and industry stakeholders as they grapple with establishing ethical guidelines and robust legal frameworks to govern the future of music in an AI-driven world. The core challenge is to protect the creative economy from those who seek to exploit technological advancements for illicit financial gain, ensuring that the digital music landscape remains fair and equitable for all.
The future of music streaming: A battle against bots and AI fraud
The future of music streaming is poised to be an ongoing technological arms race, with platforms continuously developing more sophisticated defenses against those attempting to defraud the system by using bots to listen to music. As AI tools become more advanced and accessible, the methods used for stream manipulation will likely become increasingly complex, demanding equally sophisticated countermeasures. This battle will shape not only the technical infrastructure of streaming services but also the regulatory and ethical frameworks governing digital music.
On the detection front, we can anticipate the deployment of even more advanced AI and machine learning algorithms. These systems will be capable of analyzing vast datasets for subtle anomalies, identifying patterns of bot behavior that mimic human activity more closely, and even predicting potential fraudulent schemes before they scale. Behavioral biometrics, network analysis, and even forensic analysis of audio files for AI markers could become standard tools. Platforms might also leverage blockchain technology to create more transparent and immutable records of streams and royalty distributions, making it harder to falsify data.
From a policy perspective, there will likely be increased collaboration among streaming platforms, record labels, and government agencies to establish unified standards for content authenticity and anti-fraud measures. This could include stricter verification processes for artists uploading music, mandatory AI disclosure requirements, and more severe penalties for those caught engaging in stream manipulation. The industry might also explore alternative royalty distribution models, moving away from the purely pro-rata system to a more “user-centric” model, where each subscriber’s fees are distributed only to the artists they actually listen to. While complex to implement, such models could significantly diminish the incentive for artificial streaming by directly linking revenue to genuine listener engagement.
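The contrast between the two distribution models can be sketched with toy numbers (all figures illustrative): under a user-centric model, each subscription fee is split only among the artists that subscriber actually streamed, so a bot account’s fraudulent volume can never extract more than its own fee.

```python
def user_centric_payouts(fee_per_user: float,
                         listens: dict[str, dict[str, int]]) -> dict[str, float]:
    """Split each subscriber's fee only among the artists they streamed."""
    payouts: dict[str, float] = {}
    for user, streams in listens.items():
        total = sum(streams.values())
        for artist, n in streams.items():
            payouts[artist] = payouts.get(artist, 0.0) + fee_per_user * n / total
    return payouts

# Two real fans plus one bot account looping its own uploads.
listens = {
    "fan1": {"artist_a": 90, "artist_b": 10},
    "fan2": {"artist_b": 50},
    "bot":  {"bot_farm": 100_000},  # sheer volume no longer matters
}
print(user_centric_payouts(10.0, listens))
# artist_a: $9.00, artist_b: $11.00, bot_farm: $10.00 -- capped at one fee
```

Compare this with the pro-rata pool, where those 100,000 bot streams would dwarf the fans’ 150 plays and capture nearly the entire pool; here the fraudster recovers only the $10 fee they paid in.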
Ultimately, the battle against bots and AI fraud in music streaming is a fight for authenticity and fairness. Success will depend on a combination of technological innovation, robust legal enforcement, and a collective commitment from all stakeholders—platforms, artists, and listeners—to uphold the integrity of the digital music ecosystem. The goal is to ensure that streaming remains a viable and equitable means for artists to share their creations and for fans to discover and enjoy music, free from the shadow of manipulation.
What can artists and consumers do to protect the integrity of music streaming?
Protecting the integrity of music streaming from fraudulent activities like using bots to listen to music is a shared responsibility that requires active participation from both artists and consumers. While platforms invest heavily in detection and prevention, individual actions can significantly contribute to a healthier and more equitable digital music ecosystem.
For artists, vigilance and adherence to ethical practices are paramount. Firstly, artists should always upload their music through legitimate distributors and avoid any third-party services that promise “guaranteed streams” or offer to boost play counts for a fee, as these are often tied to bot networks. Secondly, artists should monitor their streaming data for suspicious activity. Unexplained, massive spikes in plays, especially from unusual geographical locations or at odd times, could indicate that their music is being targeted by bot farms, either by fraudsters attempting to exploit their content or by malicious actors trying to get them into trouble. Reporting such anomalies to their distributor or directly to the streaming platform is crucial. Finally, artists should advocate for transparent royalty structures and support industry initiatives aimed at combating fraud and promoting fair compensation.
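An artist monitoring their own dashboard exports could sketch the spike check described above like this; the data shape and the z-score threshold are assumptions for the example, not part of any platform’s tooling:

```python
from statistics import mean, stdev

def detect_spikes(daily_plays: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return indices of days whose play count is a z-score outlier
    relative to all the other days in the window."""
    spikes = []
    for i, plays in enumerate(daily_plays):
        others = daily_plays[:i] + daily_plays[i + 1:]
        mu, sigma = mean(others), stdev(others)
        if sigma > 0 and (plays - mu) / sigma > z_threshold:
            spikes.append(i)
    return spikes

# A stretch of ~200 plays/day with one unexplained 8,000-play day.
history = [200, 210, 190, 205, 195, 8_000, 200, 198, 207]
print(detect_spikes(history))  # [5]
```

A flagged day like index 5 is exactly the kind of anomaly worth reporting to a distributor or platform before it triggers a fraud penalty against the artist’s own catalog.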
Consumers also play a vital role in maintaining the integrity of streaming. The most important action is to engage with music authentically. Listen to artists you genuinely enjoy, share their music, and support them through legitimate channels. If you suspect an artist’s popularity is entirely manufactured, or if you encounter suspicious accounts or unusually repetitive listening patterns, report these observations to the streaming platform. Being discerning about what you listen to and how you discover new music helps to filter out artificially boosted content. Educating oneself about the challenges artists face and understanding how the streaming economy works can empower consumers to make more informed choices that genuinely support the artists they love. By collectively valuing authentic engagement over manipulated metrics, artists and consumers can help ensure that the future of music streaming remains vibrant, fair, and truly reflective of human creativity and appreciation.








