
KYC, Bitcoin, and the failed hopes of AML policies: Tracking funds on-chain

The cornerstone of the modern approach to fighting money laundering is to prevent illicit funds from entering the financial system. The rationale is understandable: if criminals cannot use their money, they will eventually have to stop whatever they are doing and get a 9-to-5 job.

However, after 20 years of ever-tighter (and ever more expensive) AML regulations, the levels of organized crime, tax evasion, and drug use show no signs of decreasing. At the same time, the basic right to privacy is unceremoniously violated on a daily basis, with every financial operation, no matter how small, subject to extensive verification and piles of paperwork. See Part 1 of this story for details and numbers.

This prompts a question: should we reconsider our approach to the AML strategy?

Two years ago, fintech author David G.W. Birch wrote an article for Forbes reflecting on the main principle of AML – gatekeeping. The key thought could be summarized as “instead of trying to prevent criminals from getting into the system, we let them in and monitor what they are up to.”

Indeed, why erect expensive AML gates that push the bad guys toward hard-to-trace cash or works of art, when we could simply let them in and follow the money to hunt them down? To do so, we can use both the existing reporting system within traditional finance and on-chain analytics within the blockchain. While the former is more or less understood, the latter is still a mystery to most people. What’s more, politicians and bankers regularly accuse crypto of being a tool for criminals, tax evaders, and all sorts of Satan worshipers, further deepening the misunderstanding.

To shed more light on this matter, we need to better understand how on-chain analytics works. That is no trivial task: blockchain analysis methods are often proprietary, and analytics companies that shared them would risk losing their business edge. However, some firms, like Chainalysis, publish fairly detailed documentation, and the Luxembourg-based firm Scorechain agreed to share some details of its trade for this story. Combining this data gives us a good idea of both the potential and the limitations of on-chain analytics.

How does on-chain analytics work?

The blockchain is transparent and auditable by anyone. However, not everyone can draw meaningful conclusions from the myriad datasets it is composed of. Gathering the data, identifying the entities behind addresses, and putting the conclusions into a readable format is the specialty of on-chain analytics firms.

It all starts with getting a copy of the ledger, i.e. synchronizing the internal software with the blockchains.

Then a tedious stage of mapping begins. How can we know that this address belongs to an exchange, and that one to a darknet marketplace? Analysts employ all their creativity and resourcefulness to de-pseudonymize the blockchain as much as they can. Any technique is good as long as it works: collecting open-source data from law enforcement, scraping websites, combing through X (formerly Twitter) and other social media, acquiring data from specialized blockchain explorers like Etherscan, following the trail of stolen funds at the request of attorneys… Some services are identified by interacting with them directly, e.g. sending funds to centralized exchanges to identify their deposit addresses. To reduce errors, the data is often cross-checked against different sources.
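As a toy illustration of that last cross-checking step, a label might only be accepted into a mapping database when several independent sources agree on it. Everything below (addresses, labels, the agreement threshold) is invented for illustration; real pipelines are far more sophisticated:

```python
# Hypothetical sketch: accept an address label only when enough
# independent sources (scrapes, test deposits, explorer tags) agree.
from collections import Counter

def consolidate_labels(observations, min_agreement=2):
    """Keep a label only if at least `min_agreement` sources report it."""
    accepted = {}
    for address, labels in observations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count >= min_agreement:
            accepted[address] = label
    return accepted

observations = {
    "addr_exchange_a": ["Exchange A", "Exchange A", "Exchange A"],
    "addr_mystery": ["Darknet Market", "Gambling Site"],  # sources disagree
}
print(consolidate_labels(observations))
# only the address whose sources agree gets a label
```

The disagreeing address is left unlabeled rather than guessed at, mirroring the cautious, cross-checked approach the firms describe.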

Once the addresses are identified to the best of one’s ability, the maze of transaction hashes becomes a little clearer. Yet the picture is still far from complete. For account-based blockchains like Ethereum, identifying an address allows its funds to be tracked in a fairly straightforward manner; for UTXO-based blockchains like Bitcoin, the situation is much less obvious.

Indeed, unlike Ethereum, which keeps track of account balances, the Bitcoin blockchain keeps track of unspent transaction outputs (UTXOs). A transaction always spends its input UTXOs in full. If a person wishes to spend only part of the coins, the unspent part, known as change, is assigned to a newly created address controlled by the sender.
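A minimal sketch of this mechanic, with invented amounts (in satoshis) and placeholder addresses; real coin selection and fee estimation are considerably more involved:

```python
# Illustrative sketch: a Bitcoin payment consumes UTXOs in full and
# returns the surplus as change to a fresh sender-controlled address.

def build_payment(utxos, amount, fee, payee_addr, change_addr):
    """Select UTXOs until they cover amount + fee; route the surplus as change."""
    selected, total = [], 0
    for utxo in utxos:
        selected.append(utxo)
        total += utxo["value"]
        if total >= amount + fee:
            break
    if total < amount + fee:
        raise ValueError("insufficient funds")
    outputs = [{"address": payee_addr, "value": amount}]
    change = total - amount - fee
    if change > 0:
        outputs.append({"address": change_addr, "value": change})
    return {"inputs": selected, "outputs": outputs}

tx = build_payment(
    utxos=[{"txid": "aa", "vout": 0, "value": 80_000}],
    amount=50_000, fee=1_000,
    payee_addr="payee", change_addr="change_1",
)
# the 80,000-sat UTXO is spent whole: 50,000 to the payee,
# 29,000 back to the sender's fresh change address, 1,000 as fee
```

It is precisely this change output, landing on a brand-new address, that makes following Bitcoin funds harder than following an Ethereum account.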

It is the job of on-chain analytics firms to make sense of these movements and determine clusters of UTXOs associated with the same entity.

Can on-chain analytics be trusted?

On-chain analytics is not an exact science. Both the mapping and the clustering of UTXOs rely on experience and on a carefully calibrated set of heuristics that each company has developed for itself.

This issue was highlighted last July in a court hearing involving Chainalysis, which had provided forensic expertise in the US v. Sterlingov case. The firm’s representative admitted not only that its methods were not peer-reviewed or otherwise scientifically validated, but also that the firm did not keep track of its false positives. In Chainalysis’ defense, the first point is understandable: the methods each firm uses to analyze the blockchain are closely guarded trade secrets. The issue of false positives, however, must be handled better, especially when it could end up sending someone to jail.

Scorechain takes a different approach, erring on the side of caution and using only clustering methods that do not generate false positives, such as the multi-input heuristic (the assumption that all input addresses of a single transaction belong to the same entity). Unlike Chainalysis, it does not use change heuristics, which produce many false positives. In some cases its team can manually track UTXOs if a human operator has sufficient reason to do so, but overall this approach tolerates blind spots, counting on additional information surfacing in the future to fill them in.
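The multi-input heuristic can be sketched as a simple union-find pass over transactions. The transactions below are invented, and a real implementation would first filter out CoinJoin-style transactions, which deliberately violate the heuristic’s assumption:

```python
# Sketch of the multi-input heuristic: all addresses appearing together
# as inputs of one transaction are merged into a single cluster.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_by_inputs(transactions):
    """Merge the input addresses of each transaction into one cluster."""
    uf = UnionFind()
    for tx in transactions:
        addrs = tx["input_addresses"]
        uf.find(addrs[0])            # register single-input transactions too
        for addr in addrs[1:]:
            uf.union(addrs[0], addr)
    clusters = {}
    for addr in list(uf.parent):
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())

txs = [
    {"input_addresses": ["A", "B"]},  # A and B co-spent once...
    {"input_addresses": ["B", "C"]},  # ...and B later co-spent with C
    {"input_addresses": ["D"]},
]
print(cluster_by_inputs(txs))  # two clusters: {"A", "B", "C"} and {"D"}
```

Because co-spending links are transitive, two transactions that share one address quietly merge three addresses into a single presumed wallet.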

The very notion of a heuristic – i.e. a strategy that takes a practical but not necessarily scientifically proven approach to problem-solving – implies that it cannot guarantee 100% reliability. Its effectiveness is measured by outcomes. The FBI calling Chainalysis’ methods “generally reliable” could serve as a mark of quality, but it would be better if all on-chain analytics firms started measuring and sharing their rates of false positives and false negatives.

Seeing through the fog

There are ways to obfuscate the trail of funds or make it harder to follow. Crypto hackers and scammers are known to use all kinds of techniques: chain hopping, privacy blockchains, mixers…

Some of them, like swapping or bridging assets, can be traced by on-chain analytics firms. Others, like the privacy blockchain Monero, or various mixers and tumblers, often cannot. There have been instances, however, when Chainalysis claimed to have de-mixed transactions that passed through a mixer, and most recently, Finnish authorities announced that they had traced Monero transactions as part of an investigation.

In any case, the very fact of having used these masking techniques remains visible and can serve as a red flag for AML purposes. The US Treasury’s addition last year of the Tornado Cash mixer’s smart contract addresses to the OFAC list is one such example. Now, when a coin’s history traces back to this mixer, the funds are suspected of belonging to illicit actors. This is not great news for privacy advocates, but it is rather reassuring for crypto AML.

One might ask: what is the point of flagging mixed coins and tracing them across blockchains if we don’t have a concrete person to pin them to, as we would in the banking system? Luckily, criminals have to interact with the non-criminal world, and tainted money sooner or later ends up either with goods and service providers or in a bank account, which is where law enforcement can identify actual persons. This is how the FBI made its biggest-ever seizure of $4.5 billion worth of bitcoin (at 2022 prices) following the Bitfinex hack. It also works in reverse: if law enforcement gains access to a criminal’s private keys, it can walk back through the blockchain history to identify the addresses that interacted with them at some point. This is how the London Metropolitan Police uncovered a whole drug-dealing network from a single arrest (source: Chainalysis’ 2023 Crypto Crime Report).
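The forward-tracing idea can be sketched as a graph search that follows tainted funds until they reach a labeled service, where KYC records take over. The transaction graph and labels below are invented for illustration:

```python
# Simplified forward taint tracing: starting from addresses tied to a hack,
# follow outgoing transfers until the funds land at a labeled service
# (an exchange, a merchant) where a real-world identity can be requested.
from collections import deque

def trace_to_services(edges, tainted_roots, labels):
    """BFS over 'address -> addresses it sent to'; report labeled endpoints hit."""
    hits, seen, queue = set(), set(tainted_roots), deque(tainted_roots)
    while queue:
        addr = queue.popleft()
        if addr in labels:                 # reached a known off-ramp
            hits.add((addr, labels[addr]))
            continue                       # identification happens off-chain
        for nxt in edges.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits

edges = {"hacker": ["hop1", "hop2"], "hop1": ["exchange_deposit"], "hop2": ["hop3"]}
labels = {"exchange_deposit": "Exchange A"}
print(trace_to_services(edges, ["hacker"], labels))
# the tainted funds surface at Exchange A's deposit address
```

Running the same search over the reversed graph is the “walk back through the blockchain history” direction used after a private key seizure.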

Crime has existed since the dawn of humanity and will probably accompany it until the end, using ever-evolving camouflage techniques. Luckily, crime-detection methods follow suit, and it happens that the blockchain is an ideal environment for deploying digital forensics tools. After all, it is transparent and accessible to everyone (which, by the way, cannot be said of the banking sector).

One can argue that current on-chain analysis methods need improvement, and that point holds true. However, it is clear that even in this imperfect form, on-chain analytics is already an effective tool for tracking bad guys on-chain. Perhaps, then, it is time to reconsider our approach to AML and let the criminals into the blockchain?

A special thank you to the Scorechain team for sharing their knowledge.

This is a guest post by Marie Poteriaieva. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.

Random Access Markets: The Free Market Of Information

This article is featured in Bitcoin Magazine’s “The Inscription Issue”.


The Value Of Bits

Data is the most liquid commodity in the world. In the smartphone era, unless extreme precautions are taken, everywhere you go, everything you say, and everything you consume is quantified and fed into the vast spectrum of information goods markets. Information goods, being inherently nonphysical bits of data, can be conceptualized, produced, disseminated, and consumed exclusively as digital entities. The internet, along with other digital technologies for computation and communication, serves as a comprehensive e-commerce infrastructure, facilitating the entire life cycle of designing, producing, distributing, and consuming a wide array of information goods. Migrating existing information goods from traditional formats to digital ones is easily achievable, to say nothing of the media formats that were completely infeasible in the analog world.

A preliminary examination of products within the information goods industry reveals that, while they all exist as pure information products and are uniformly impacted by technological advancements, their respective markets undergo distinct economic transformation processes. These variations in market evolution are inherently tied to differences in product characteristics, production methods, distribution channels, and consumption patterns. Notably, the separation of value creation and revenue processes introduces opportunistic scenarios, potentially leaving established market players with unprofitable customer bases and costly yet diminishing value-creation processes.

Simultaneously, novel organizational architectures may emerge in response to evolving technological conditions, effectively creating and destroying traditional information goods markets overnight. The value chains, originally conceived under the assumptions of the traditional information goods economy, undergo radical redesigns as new strategies and tooling materialize in response to the transformative influence of digital production, distribution, and consumption on conventional value propositions for data. For example, mass surveillance was never practical when creating even a single photo meant hours of labor in a specialized development room with specific chemical and lighting conditions. Now that there is a camera on every corner, a microphone in every pocket, a ledger entry for every financial transaction, and the means to transmit said data essentially for free across the planet, the market conditions for mass surveillance have unsurprisingly given rise to mass surveillance as a service.

An entirely new industry of “location firms” has grown up, with The Markup identifying nearly 50 companies selling location data as a service in a 2021 article by Keegan and Ng titled “There’s a Multibillion-Dollar Market for Your Phone’s Location Data”. One such firm, Near, describes itself as curating “one of the world’s largest sources of intelligence on People and Places”, having gathered data representing nearly two billion people across 44 countries. According to a Grand View Research report titled “Location Intelligence Market Size And Share Report, 2030”, the global location intelligence market was worth an estimated “$16.09 billion in 2022 and is projected to grow at a compound annual growth rate (CAGR) of 15.6% from 2023 to 2030”. The growth of this new information goods industry is mainly “driven by the growing penetration of smart devices and increasing investments in IoT [internet of things] and network services as it facilitates smarter applications and better network connectivity”, giving credence to the idea that technological advancement front-runs network growth, which in turn front-runs entirely new forms of e-commerce markets. This was, of course, accelerated by the COVID-19 pandemic, during which government policies resulted in “the increased adoption of location intelligence solutions to manage the changing business scenario as it helps businesses to analyze, map, and share data in terms of the location of their customers”, under the guise of user and societal health.

Within any information goods market, there are only two possible outcomes for market participants: distributing the acquired data or keeping it for yourself.



The Modern Information Goods Market

In the fall of 2021, China launched the Shanghai Data Exchange (SDE) in an attempt to create a state-owned monopoly on a novel speculative commodities market for data scraped from one of the most digitally surveilled populations on the planet. The SDE offered 20 data products at launch, including customer flight information from China Eastern Airlines, as well as data from telecommunications network operators such as China Unicom, China Telecom, and China Mobile. Notably, one of the first known trades made at the SDE was the Commercial Bank of China purchasing data from the state-owned Shanghai Municipal Electric Power Company under the guise of improving their financial services and product offerings.

Shortly before the founding of this data exchange, Huang Qifan, the former mayor of Chongqing, was quoted saying that “the state should monopolize the rights to regulate data and run data exchanges”, while also suggesting that the CCP should be highly selective in setting up data exchanges. “Like stock exchanges, Beijing, Shanghai and Shenzhen can have one, but a general provincial capital city or a municipal city should not have it.”

While the current information goods market has led to such innovations as speculation on the purchase of troves of user data, the modern data market was started in earnest at the end of the 1970s, exemplified in the formation of Oracle Corporation in 1977, named after the CIA’s “Project Oracle”, which featured eventual Oracle Corporation co-founders Larry Ellison, Robert Miner, and Ed Oates. The CIA was their first customer, and in 2002, nearly $2.5 billion worth of contracts came from selling software to federal, state, and local governments, accounting for nearly a quarter of their total revenue. Only a few months after September 11, 2001, Ellison penned an op-ed for The New York Times titled “A Single National Security Database” in which the opening paragraph reads “The single greatest step we Americans could take to make life tougher for terrorists would be to ensure that all the information in myriad government databases was copied into a single, comprehensive national security database”. Ellison was quoted in Jeffrey Rosen’s book The Naked Crowd as saying “The Oracle database is used to keep track of basically everything. The information about your banks, your checking balance, your savings balance, is stored in an Oracle database. Your airline reservation is stored in an Oracle database. What books you bought on Amazon is stored in an Oracle database. Your profile on Yahoo! is stored in an Oracle database”. Rosen made note of a discussion with David Carney, a former top-three employee at the CIA, who, after 32 years of service at the agency, left to join Oracle just two months after 9/11 to lead its Information Assurance Center:

“How do you say this without sounding callous?” [Carney] asked. “In some ways, 9/11 made business a bit easier. Previous to 9/11 you pretty much had to hype the threat and the problem.” Carney said that the summer before the attacks, leaders in the public and private sectors wouldn’t sit still for a briefing. Then his face brightened. “Now they clamor for it!”

This relationship has continued for 20 years, and in November 2022, the CIA awarded its Commercial Cloud Enterprise contract to five American companies — Amazon Web Services, Microsoft, Google, IBM, and Oracle. While the CIA did not disclose the exact value of the contract, documents released in 2019 suggested it could be “tens of billions” of dollars over the next 15 years. Unfortunately, this is far from the only data market integration of the private sector, government agencies, and the intelligence community, perhaps best exemplified by data broker LexisNexis.

LexisNexis was founded in 1970, and is, as of 2006, the world’s largest electronic database for legal and public-records-related information. According to their own website, LexisNexis describes themselves as delivering “a comprehensive suite of solutions to arm government agencies with superior data, technology and analytics to support mission success”. LexisNexis consists of nine board members: CEO Haywood Talcove; Dr. Richard Tubb, the longest serving White House physician in U.S. history; Stacia Hylton, former Deputy Director of the U.S. Marshal Service; Brian Stafford, former Director of the U.S. Secret Service; Lee Rivas, CEO for the public sector and health care business units of LexisNexis Risk Solutions; Howard Safir, former NYPD Commissioner and Associate Director of Operations for the U.S. Marshals Service; Floyd Clarke, former Director of the FBI; Henry Udow, Chief Legal Officer and Company Secretary for the RELX Group; and lastly Alan Wade, retired Chief Information Officer for the CIA.

While Wade was still employed by the CIA, he founded Chiliad with Christine Maxwell, sister of Ghislaine Maxwell, and daughter of Robert Maxwell. Christine Maxwell is considered “an early internet pioneer”, having founded Magellan in 1993, one of the premier search engines on the internet. After selling Magellan to Excite, she reinvested her substantial windfall into another big data search technology company: the aforementioned Chiliad. According to a 2020 report by OYE.NEWS, Chiliad made use of “on-demand, massively scalable, intelligent mining of structured and unstructured data through the use of natural language search technologies”, with the firm’s proprietary software being “behind the data search technology used by the FBI’s counterterrorism data warehouse”.

As recently as November 2023, the Wade-connected LexisNexis was given a $16-million, five-year contract with the U.S. Customs and Border Protection “for access to a powerful suite of surveillance tools”, according to available public records, providing access to “social media monitoring, web data such as email addresses and IP address locations, real-time jail booking data, facial recognition services, and cell phone geolocation data analysis tools”. Unfortunately, this is far from the only government agency to utilize LexisNexis’ data brokerage with the aim of circumventing constitutional law and civil liberties with regard to surveillance.

In the fall of 2020, LexisNexis was forced to settle for over $5 million after a class action lawsuit alleged the broker sold Department of Motor Vehicle data to U.S. law firms, who were then free to use it for their own business purposes. “Defendants websites allow the purchase of crash reports by report date, location, or driver name and payment by credit card, prepaid bulk accounts or monthly accounts”, the complaint reads. “Purchasers are not required to establish any permissible use provided in the DPPA to obtain access to Plaintiffs’ and Class Members’ MVRs”. In the summer of 2022, a Freedom of Information Act request revealed a $22 million contract between Immigration and Customs Enforcement and LexisNexis. Sejal Zota, a director at Just Futures Law and a practicing attorney working on the lawsuit, made note that LexisNexis makes it possible for ICE to “instantly access sensitive personal data — all without warrants, subpoenas, any privacy safeguards or any show of reasonableness”.

In the aforementioned complaint from 2022, the use of LexisNexis’ Accurint product allows “law enforcement officers [to] surveil and track people based on information these officers would not, in many cases, otherwise be able to obtain without a subpoena, court order, or other legal process…enabling a massive surveillance state with files on almost every adult U.S. consumer”.

A Series Of Tubes

In 2013, it came to the public’s attention that the National Security Agency had covertly breached the primary communication links connecting Yahoo and Google data centers worldwide. This information was based on documents leaked by former NSA contractor Edward Snowden and corroborated by interviews with government officials.

As per a classified report dated January 9, 2013, the NSA transmits millions of records daily from internal Yahoo and Google networks to data repositories at the agency’s Fort Meade, Maryland headquarters. In the preceding month, field collectors processed and returned 181,280,466 new records, encompassing “metadata” revealing details about the senders and recipients of emails, along with time stamps, as well as the actual content, including text, audio, and video data.

The primary tool employed by the NSA to exploit these data links is a project named MUSCULAR, carried out in collaboration with the British Government Communications Headquarters (GCHQ). Operating from undisclosed interception points, the NSA and GCHQ copy entire data streams through fiber-optic cables connecting the data centers of major Silicon Valley corporations.

This becomes particularly perplexing when considering that, as revealed by a classified document acquired by The Washington Post in 2013, both the NSA and the FBI were already actively tapping into the central servers of nine prominent U.S. internet companies. This covert operation involved extracting audio and video chats, photographs, emails, documents, and connection logs, providing analysts with the means to monitor foreign targets. The method of extraction, as outlined in the document, involves direct collection from the servers of major U.S. service providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple.

During the same period, the newspaper The Guardian reported that GCHQ — the British counterpart to the NSA — was clandestinely gathering intelligence from these internet companies through a collaborative effort with the NSA. According to documents obtained by The Guardian, the PRISM program seemingly allows GCHQ to bypass the formal legal procedures required in Britain to request personal materials such as emails, photos, and videos, from internet companies based outside the country.

PRISM emerged in 2007 as a successor to President George W. Bush’s secret program of warrantless domestic surveillance, following revelations from the news media, lawsuits, and interventions by the Foreign Intelligence Surveillance Court. Congress responded with the Protect America Act in 2007 and the FISA Amendments Act of 2008, providing legal immunity to private companies cooperating voluntarily with U.S. intelligence collection. Microsoft became PRISM’s inaugural partner, marking the beginning of years of extensive data collection beneath the surface of a heated national discourse on surveillance and privacy.

In a June 2013 statement, then-Director of National Intelligence James R. Clapper said “information collected under this program is among the most important and valuable foreign intelligence information we collect, and is used to protect our nation from a wide variety of threats. The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans”.

So why collect directly from fiber-optic cables if these private companies are already providing data to the national intelligence community? A closer look at the aforementioned data brokers for the NSA and CIA reveals that the vast majority of new submarine fiber-optic cables, essential infrastructure for the actualization of the internet as a global data market, are being built out by these same private companies. These inconspicuous cables weave across the global ocean floor, transporting 95-99% of international data through bundles of fiber-optic strands scarcely thicker than a standard garden hose. In total, the active network comprises over 1,100,000 kilometers of submarine cables.

Traditionally, these cables have been owned by consortia of private companies, primarily telecom providers. However, a notable shift has emerged. In 2016, a significant surge in submarine cable development began, and notably, this time, the purchasers are content providers — particularly the data brokers Meta/Facebook, Google, Microsoft, and Amazon. Of note is Google, which has acquired over 100,000 kilometers of submarine cable. With the completion of the Curie cable in 2019, Google’s sole ownership of submarine cables globally stands at 1.4%, as measured by length; factoring in cables with shared ownership, its overall share rises to approximately 8.5%. Facebook is close behind with 92,000 kilometers, with Amazon at 30,000 and Microsoft at around 6,500 kilometers from the partially owned MAREA cable.

There is a notable revival in the undersea cable sector, primarily fueled by investments from Facebook and Google, accounting for around 80% of 2018-2020 investments in transatlantic connections — a significant increase from the less than 20% they accounted for in the preceding three years through 2017, as reported by TeleGeography. This wave of digital giants has fundamentally transformed the dynamics of the industry. Unlike traditional practices where phone companies established dedicated ventures for cable construction, often connecting England to the U.S. for voice calls and limited data traffic, these internet companies now wield considerable influence. They can dictate the cable landing locations, strategically placing them near their data centers, and have the flexibility to modify the line structures — typically costing around $200 million for a transatlantic link — without waiting for partner approvals. These technology behemoths aim to capitalize on the increasing demand for rapid data transfers essential for various applications, including streaming movies, social messaging, and even telemedicine.

The last time we saw such an explosion of activity in building out essential internet infrastructure was during the dot-com boom of the 1990s, in which phone companies spent over $20 billion to install fiber-optic lines beneath the oceans, immediately before the massive proliferation of personal computers, home internet modems, and peer-to-peer data networks.

Data Laundering

The birth of new compression technologies in the form of digital media formats would not, by itself, have given rise to the panopticon we currently operate under without the ability to obfuscate the mass uploading and downloading of this newly created data within the ISP rails of both public- and private-sector infrastructure companies. It is likely no accident that these tools, networks, and algorithms were created under the influence of national intelligence agencies right before the turn of the millennium, the rise of broadband internet, and the sweeping unconstitutional spying on citizens made legal via the Patriot Act in the aftermath of the events of September 11, 2001.

At only 15 years old, Sean Parker, the eventual founder of Napster and first president of Facebook — a former DARPA project titled LifeLog — caught the gaze of the FBI for his hacking exploits, ending in state-appointed community service. One year later, Parker was recruited by the CIA after winning a Virginia state computer science fair by developing an early internet crawling application. Instead of continuing his studies, he interned for a D.C. startup, FreeLoader, and eventually UUNet, an internet service provider. “I wasn’t going to school,” Parker told Forbes. “I was technically in a co-op program but in truth was just going to work.” Parker made nearly six figures his senior year of high school, eventually starting the peer-to-peer music-sharing site that became Napster in 1999. While working on Napster, Parker met investor Ron Conway, who has backed every Parker product since, having also previously backed PayPal, Google, and Twitter, among others. Napster has been credited as one of the fastest-growing businesses of all time, and its influence on information goods and data markets in the internet age cannot be overstated.

In a study conducted between April 2000 and November 2001 by Sandvine, titled “Peer-to-peer File Sharing: The Impact of File Sharing on Service Provider Networks”, network measurements revealed a notable shift in bandwidth consumption patterns following the launch of new peer-to-peer tooling and new compression formats such as MP3. Specifically, the share of network bandwidth attributed to Napster traffic increased from 23% to 30%, while web-related traffic experienced a slight decrease from 20% to 19%. By 2002, observations indicated that file-sharing traffic was consuming up to 60% of internet service providers’ bandwidth. The creation of new information goods markets comes downstream of new technological capabilities, and the dominance of peer-to-peer communications in internet user activity made the implications for the scope and scale of data stream proliferation plain to see.

Of course, peer-to-peer technology did not cease to advance after Napster. “Swarms”, a style of downloading and uploading essential to the development of Bram Cohen’s BitTorrent, were invented for eDonkey2000 by Jed McCaleb — the eventual founder of Mt. Gox, Ripple Labs, and the Stellar Foundation. The proliferation of advanced packet exchange over the internet has led to entirely new types of information goods markets, essentially boiling down to three main categories: public and permanent data, selectively private data, and coveted but difficult-to-obtain data.



Bitcoin-native Data Markets

Parent/Child Recursive Inscriptions

While publishing data directly to Bitcoin is hardly a new phenomenon, the popularization of Ord — released by Bitcoin developer Casey Rodarmor in 2022 — has led to a massive increase in interest and activity around Bitcoin-native publishing. While some of this can certainly be attributed to a newly formed artistic culture siphoning activity and value away from Ethereum — and to other ventures making erroneous claims of blockchain-native publishing — the majority of this volume comes downstream of the construction of inscription transactions that use the SegWit discount via specially authored Taproot scripts, and of awareness of the immutability, durability, and availability of data offered solely by the Bitcoin blockchain. The SegWit discount was specifically created to incentivize the consolidation of unspent transaction outputs and limit the creation of excessive change in the UTXO set, but for Bitcoin-native publishing it has effectively created a substantial 75% markdown on the cost of the bits in a block that carry an inscription’s arbitrary data. This is far from a non-factor in the creation of a sustainable information goods market.
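A back-of-the-envelope sketch of that discount: witness bytes (where inscription payloads live) count one weight unit each, non-witness bytes count four, and fees are charged per virtual byte (weight divided by four). The transaction sizes and fee rate below are invented round numbers:

```python
# Sketch of the SegWit weight discount applied to inscription data.
WU_PER_VBYTE = 4

def vbytes(non_witness_bytes, witness_bytes):
    """Virtual size: non-witness bytes weigh 4 units, witness bytes weigh 1."""
    weight = non_witness_bytes * 4 + witness_bytes * 1
    return weight / WU_PER_VBYTE

def fee(non_witness_bytes, witness_bytes, sat_per_vbyte):
    return round(vbytes(non_witness_bytes, witness_bytes) * sat_per_vbyte)

payload = 100_000                 # 100 kB of inscription data in the witness
print(fee(300, payload, 20))      # 506,000 sats with the witness discount
print(fee(300 + payload, 0, 20))  # 2,006,000 sats if the same bytes were non-witness
```

The witness placement makes the payload roughly four times cheaper, which is the 75% markdown described above.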

Taking this one step further, the implementation of a self-referential inscription mechanism allows users to string data publishing across multiple Bitcoin blocks, avoiding the cost of fitting a file into a single block auction. This enables both the inscription of files beyond 4 MB and the referencing of previously inscribed material, such as executable software, code for generative art, or the image assets themselves. In the case of the recent Project Spartacus, recursive inscriptions using what is known as a parent inscription served as a crowdfunding mechanism, publicly sourcing the satoshis needed to publish the Afghan War logs onto the Bitcoin blockchain forever. This solves the need for public and permanent publishing of known and available data by a pseudonymous set of users, but it requires that the data be available during the minting process itself, which opens the door to centralized pressure points and the potential censoring of inscription transactions within a public mint by nefarious mining pools.
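The chunking step behind multi-block publishing can be sketched as follows. This is a hedged illustration, not the actual Ord implementation: the per-transaction payload budget and the parent inscription ID are hypothetical placeholders, and real chunks must leave room for transaction overhead under the 4 MB block weight ceiling:

```python
# Illustrative sketch: split a large file into per-block chunks that each
# reference a parent inscription ID, in the spirit of parent/child recursion.
# CHUNK_LIMIT and the parent ID are made-up values for demonstration.

CHUNK_LIMIT = 350_000  # hypothetical per-transaction payload budget, bytes

def chunk_file(data: bytes, parent_id: str, limit: int = CHUNK_LIMIT):
    """Yield (parent_id, index, chunk) tuples covering the whole payload."""
    for i in range(0, len(data), limit):
        yield (parent_id, i // limit, data[i:i + limit])

blob = bytes(1_000_000)  # a 1 MB stand-in for the file to inscribe
chunks = list(chunk_file(blob, parent_id="<parent-inscription-id>"))
assert b"".join(c for _, _, c in chunks) == blob  # reassembly is lossless
print(len(chunks))  # 3 transactions needed at this budget
```

Each child carries its parent's ID and an index, so a renderer walking the parent can reassemble the original file in order, however many blocks apart the pieces were confirmed.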

Precursive Inscriptions

With the advent of Bitcoin-native inscriptions, the possibility of immutable, durable, and censorship-reduced publishing has come to fruition. The current iteration of inscription technology allows users to post their data via a permanent but publicly propagated Bitcoin transaction. However, this also means that yet-to-be-confirmed inscription transactions and their associated data can be noticed while still in the mempool. This issue can be mitigated by introducing encryption into the inscription process, leaving encrypted but otherwise innocuous data to be propagated by Bitcoin nodes and eventually published by Bitcoin miners, with no ability to censor it based on content. Encryption also prevents inscriptions meant for speculation from being front-run by malicious collectors who pull inscription data from the mempool and rebroadcast it at an increased fee rate in order to be confirmed sooner.

Precursive inscriptions aim to enable the private, encrypted publishing of data spread out over multiple Bitcoin blocks, which can then be published at a whim via a recursive publishing transaction containing the private key to decrypt the previously inscribed data. For instance, a collective of whistleblowers could discreetly upload data to the Bitcoin blockchain, unbeknownst to miners or node runners, while deferring its publication until a preferred moment. Since the data is encrypted during its initial inscribing phase, and since it appears uncorrelated until it is recursively associated by the publishing transaction, a user can continually re-sign and propagate the time-locked parent inscription for extended durations of time. If the user cannot sign a further time-locked publishing transaction due to incarceration, the already-propagated publishing transaction will be confirmed after the time-lock period ends, giving the publisher a dead man’s switch.
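The encrypt-now, reveal-later flow can be sketched with a toy stream cipher built from SHA-256 in counter mode. This is purely illustrative — the article does not specify a cipher, and a real implementation should use an audited, authenticated scheme — but it shows the core property: the inscribed bytes look like noise until a later transaction reveals the key:

```python
# Toy sketch of the precursive flow: inscribe ciphertext today, reveal the
# key later. The SHA-256 counter-mode keystream is a stand-in for a proper
# authenticated cipher and should not be used as-is in production.
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key via SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data against the keystream; applying it twice round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)
document = b"the payload, inscribed early but unreadable"
ciphertext = xor(document, key)          # inscribed now, looks like noise
# ... later, a publishing transaction reveals `key` on-chain ...
assert xor(ciphertext, key) == document  # anyone can now decrypt
```

Because decryption needs only the revealed key and the already-confirmed ciphertext, publication becomes a single small transaction, and the time-locked variant of that transaction is what provides the dead man's switch.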

The specially authored precursive inscription process presented in this article offers a novel approach to secure and censorship-resistant data publishing within the Bitcoin blockchain. By leveraging the inherent characteristics of the Bitcoin network, such as its decentralized and immutable nature, the method described here addresses several key challenges in the field of information goods, data inscription, and dissemination. The primary objective of precursive inscriptions is to enhance the security and privacy of data stored on the Bitcoin blockchain, while also mitigating the risk of premature disclosure. One of the most significant advantages of this approach is its ability to ensure that the content remains concealed until the user decides to reveal it. This process not only provides data security but also maintains data integrity and permanence within the Bitcoin blockchain.

This leads us to the third and final fork of the information good data markets needed for the modern age: setting the price for wanted but currently unobtained bits.

ReQuest

ReQuest aims to create a novel data market allowing users to issue bounties for coveted data, seeking the secure and immutable storage of specific information on the Bitcoin blockchain. The primary bounty serves a dual role by covering publishing costs and rewarding those who successfully fulfill the request. Additionally, the protocol allows for the increase of bounties through contributions from other users, increasing the chances of successful fulfillment. Following an inscription submission, users who initiated the bounty can participate in a social validation process to verify the accuracy of the inscribed data.

Implementing this concept involves a combination of social vetting to ensure data accuracy, evaluating contributions to the bounty, and adhering to specific contractual parameters measured in byte size. The bounty fulfillment process requires eligible fulfillers to submit their inscription transaction hash or a live magnet link for consideration. In cases where the desired data is available but not natively published on Bitcoin — or widely known but currently unavailable, such as a renowned .STL file or a software client update — the protocol offers an alternative method to social consensus for fulfillment, involving hashing the file and verifying the resulting SHA-256 output, which provides a foolproof means of meeting the bounty’s requirements. The collaborative nature of these bounties, coupled with their ability to encompass various data types, ensures that ReQuest’s model can effectively address a broad spectrum of information needs in the market.
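The hash-based fulfillment path described above needs no social vote at all, and is simple enough to sketch directly. The function names are illustrative, not from any ReQuest specification:

```python
# Sketch of hash-based bounty fulfillment: when the target file is already
# known by its SHA-256 digest, fulfillment is verified by recomputation
# rather than social consensus. Names are illustrative.
import hashlib

def verify_fulfillment(submitted: bytes, expected_digest_hex: str) -> bool:
    """Recompute SHA-256 over the submitted bytes and compare digests."""
    return hashlib.sha256(submitted).hexdigest() == expected_digest_hex

# The bounty creator commits to the digest when the bounty opens.
target = b"contents of a renowned .STL file"
bounty_digest = hashlib.sha256(target).hexdigest()

assert verify_fulfillment(target, bounty_digest)            # exact match pays
assert not verify_fulfillment(b"tampered bytes", bounty_digest)
```

The social validation process remains necessary only for the harder case, where the requested data is not yet known and so cannot be committed to by hash in advance.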

For ReQuest bounties involving large file sizes unsuitable for direct inscription on the Bitcoin blockchain, an alternative architecture known as Durabit has been proposed, in which a BitTorrent magnet link is inscribed and its seeding is maintained through a Bitcoin-native, time-locked incentive structure.

Durabit

Durabit aims to incentivize durable, large data distribution in the information age. Through time-locked Bitcoin transactions and the use of magnet links published directly within Bitcoin blocks, Durabit encourages active long-term seeding while even helping to offset initial operational costs. As the bounty escalates, it becomes increasingly attractive for users to participate, creating a self-sustaining incentive structure for content distribution. The Durabit protocol escalates the bounty payouts to provide a sustained incentive for data seeding. This is done not by increasing rewards in satoshi terms, but rather by increasing the epoch length between payouts exponentially, leveraging the assumed long-term price increase due to deflationary economic policy in order to keep initial distribution costs low. Durabit has the potential to architect a specific type of information goods market via monetized file sharing and further integrate Bitcoin into the decades-long, peer-to-peer revolution.
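The escalation mechanism — constant satoshi payouts with exponentially lengthening epochs between them — can be sketched as a simple schedule generator. All the numbers here are illustrative, not taken from the Durabit proposal:

```python
# Hedged sketch of a Durabit-style payout schedule: each payout pays the
# same amount in satoshis, but the time-lock between payouts doubles,
# leaning on an assumed long-term purchasing-power increase to make later
# payouts more valuable. All figures are illustrative.

def payout_schedule(payout_sats: int, first_epoch_blocks: int, rounds: int):
    """Return (unlock_height_offset, payout_sats) pairs with doubling epochs."""
    schedule, height = [], 0
    epoch = first_epoch_blocks
    for _ in range(rounds):
        height += epoch
        schedule.append((height, payout_sats))
        epoch *= 2  # exponential epoch growth keeps early funding costs low
    return schedule

# ~30 days of blocks for the first epoch, then 60, 120, 240, 480 days.
sched = payout_schedule(payout_sats=50_000, first_epoch_blocks=4_320, rounds=5)
print(sched)
```

The funder's total outlay is fixed up front (here, 250,000 sats across five payouts), while the stretched epochs mean the final reward lands years out, when — under the protocol's deflationary assumption — the same satoshis buy more seeding.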

These novel information good markets actualized by new Bitcoin-native tooling can potentially reframe the fight for publishing, finding, and upholding data as the public square continues to erode.

Increasing The Cost Of Conspiracy

The information war is fought on two fronts: the architecture that incentivizes durable and immutable public data publishing, and the disincentivization of the large-scale gathering of personal data — data often sold back to us in the form of specialized commercial content, or surveilled by intelligence agencies to aid in targeted propaganda, psychological operations, and the restriction of dissident narratives and publishers. The conveniences offered by walled-garden apps and private-sector-in-name-only networks are dangled in order to access troves of metadata from real users. And while user metrics can be inflated, the data gleaned from bots is completely useless to data-harvesting commercial applications such as Large Language Models (LLMs) and current AI interfaces.

There are two areas in which these algorithms necessitate verifiable data: the authenticity of the model’s code itself, and the selected input it inevitably parses. As for the code, techniques such as hashing previously audited code upon publishing state updates could be used to ensure the replicability of desired features and mitigate harmful adversarial functionality. Dealing with the input — the LLMs’ learning fodder — is likewise two-pronged: cryptographic sovereignty over the data that is actually valuable to the open market, and the active jamming of signal fidelity with data-chaff. It is perhaps not realistic to expect your everyday person to run noise-generating APIs that constantly feed the farmed, public datasets with heaps of lossy data, creating a data-driven feedback loop in these self-learning algorithms. But by creating alternative data structures and markets, built to the qualities of the specific “information good”, we can perhaps incentivize — or at least subsidize — the perceived economic cost of everyday people giving up their convenience. The deflation of publishing costs brought by digitization and the interconnectivity of the internet has made it all the more essential for everyday people to at least take back control of their own metadata.

It is not simply data that is the new commodity of the digital age, but your data: where you have been, what you have purchased, who you talk to, and the many manipulated whys that can be triangulated from the aforementioned wheres, whats, and whos. By mitigating the access to this data via obfuscation methods such as using VPNs, transacting with private payment tools, and choosing hardware powered by certain open source software, users can meaningfully increase the cost needed for data harvesting by the intelligence community and its private sector compatriots. The information age requires engaged participants, incentivized by the structures upholding and distributing the world’s data — their data — on the last remaining alcoves of the public square, as well as encouraged and active retention of our own information.

Most of the time, a random, large number represented in bits is of little value to a prospective buyer. And yet Bitcoin’s store-of-value property is derived entirely from users being able to publicly and immutably publish a signature to the blockchain, possible only through the successful keeping of a private key secret. A baselayer Bitcoin transaction fee is priced not by the amount of value transferred, but by how many bytes of space are required in a specific block to articulate all of its spend restrictions, expressed in sat/vbyte. Bitcoin is a database that manages to incentivize users to replicate its ledger, communicate its state updates, and expend large swaths of energy to randomize its consensus model.
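A minimal sketch of that pricing rule: the fee is a function of virtual size and fee rate only, so the value transferred never enters the formula. The 141-vbyte size below is just an illustrative figure for a typical single-input spend:

```python
# Baselayer fee pricing: fee depends on virtual size and fee rate, not on
# the value moved. A 1 BTC spend and a 1,000 BTC spend with the same
# script structure pay the same fee at the same fee rate.

def fee_sats(vsize_vbytes: int, fee_rate_sat_per_vb: int) -> int:
    """Total fee in satoshis for a transaction of the given virtual size."""
    return vsize_vbytes * fee_rate_sat_per_vb

small_value_tx = fee_sats(vsize_vbytes=141, fee_rate_sat_per_vb=30)
large_value_tx = fee_sats(vsize_vbytes=141, fee_rate_sat_per_vb=30)
assert small_value_tx == large_value_tx == 4_230  # identical fee either way
```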

Every ten minutes, on average, another 4 MB auction.

If you want information to be free, give it a free market. 

This article is featured in Bitcoin Magazine’s “The Inscription Issue”.

Impending Halving Creates Chaos and Opportunity in Bitcoin Market

The below is an excerpt from a recent edition of Bitcoin Magazine Pro, Bitcoin Magazine’s premium markets newsletter. To be among the first to receive these insights and other on-chain bitcoin market analysis straight to your inbox, subscribe now.

As we get closer and closer to the impending Bitcoin halving, the combined pressures of wildly increasing demand and shrinking supply have created an unusual market, turning a historically positive omen into an explosive opportunity for profit.

The Bitcoin ETF approval has changed the face of Bitcoin as we know it. Since the SEC made its fateful decision in January, the resultant developments have caused worldwide upheaval; billions have flowed into these new investment vehicles, and regulators in many countries are reconsidering the role of Bitcoin in the financial establishment. Despite some initial setbacks, the market has comfortably hit new all-time highs, and the price has stayed in a very impressive range despite fluctuations.

Nevertheless, we are in a unique situation that can impact the market in unpredictable ways. Bitcoin’s next halving is set to arrive in April, and this will be the first time in its entire history that a halving coincides with an all-time high price. Although there have been great differences between the major halvings, one trend has been generally noticeable: even with large steady gains, it takes in the ballpark of a year to 18 months before Bitcoin breaks all records with a true price spike. One year out from the halving in July 2016, Bitcoin had more than doubled; yet a few months later, the growth was closer to 30x.

There is plenty of optimism from substantial industry players, such as Standard Chartered’s bold prediction that Bitcoin’s value will more than double to $150k before the year is over. However, their analysis is based not so much on halving trends as on the rampaging success of the Bitcoin ETFs, and that success has also thrown us a curveball. As community discussion has been quick to point out, the major ETF issuers have been pouring billions into bitcoin, buying at astounding rates and amassing some of the world’s largest Bitcoin supplies practically overnight. If they are collectively purchasing more bitcoin than the network produces, how will they react when the spigot of new coins shuts to a trickle?

In other words, we are headed into a situation where demand is at an all-time high and there is insufficient supply to meet it. Business Insider called the upcoming halving a “momentous event”, considering that the ETFs had made “permanent changes to Bitcoin’s underlying infrastructure.” CoinShares echoed these sentiments with a warning of a positive demand shock, as Head of Research James Butterfill claimed that “The launch of multiple spot bitcoin ETFs on January 11 has led to an average daily demand of 4500 bitcoins (trading days only), while only an average of 921 new bitcoin were minted per day.” And that’s only considering the pre-halving mining rates. The ETF issuers are already relying on secondhand Bitcoin sales to fill their coffers, and this trend seems certain to intensify in the immediate future.

Isn’t this a good thing, though? Positive demand shocks, as a rule, are associated with jumps in price. And even though such shocks in critical commodities like oil can lead to inflation, Bitcoin is not yet an essential component of the world economy, so it’s unlikely that the same drawbacks will apply just yet. In other words, the answer is generally yes, but the situation can still produce alarming trends. For example, the night of March 18 saw a truly bewildering development: after coasting at highs around $70k, Bitcoin’s value on BitMEX crashed below $9k in the blink of an eye. The price recovered quickly, and the crash was in any event isolated to this one exchange, but it is still an unprecedented development.

BitMEX announced that the culprit behind this negative price spike was a series of large sell orders in the middle of the night, and that it was investigating the activity. Several anonymous whales have emerged as the likely candidates for these sales. We still have no idea who exactly they are or who was buying bitcoin at such a prodigious rate, but the episode is an example of how major selloffs can torpedo market confidence. In any event, it is only a particularly sharp instance of a general trend: “constant” spot selling as Bitcoin’s price receives a bloody nose. The market hit lows of $62k on Tuesday afternoon, after trading near $72k on the morning of the previous Friday.

Traders have nevertheless remained optimistic that these price dips are nothing more than the “bear trap” associated with the pre-halving environment, and they aren’t the only ones. Prominent executives, including Binance CEO Richard Teng and Crypto.com CEO Kris Marszalek, have endorsed the view that such price dips are a perfectly natural and temporary component of a scheduled halving. There is a clearly observable trend of substantial price dips, of 20%-40%, in the weeks immediately prior to the most recent halvings. And yet the price bounced back quickly and completely, and went on to new all-time highs.

In other words, some of the recent and sudden price dives are fully explainable using data from Bitcoin’s history. The relevant question for us, then, is whether Bitcoin’s future will follow the same line. The fact of the matter is that all the available signs point to an optimistic long-term forecast. A positive demand shock caused by ETF acquisitions and the halving may very well make it more difficult for an average consumer to buy bitcoin, but how will that difficulty manifest? Higher prices. Besides, a selling point of the ETFs is that plenty of average consumers will use them to seek exposure to bitcoin’s gains rather than direct custody. This alone will encourage ETF issuers to keep their buying pressure high. It’s impossible to say how long this market situation will continue or what it will mean for bitcoin’s use as an actual currency, but there is nothing in the current situation to suggest that bitcoin won’t keep growing.

Is it any wonder, then, that the community is awaiting the halving with such bated breath? Prominent industry figures are taking great care to prepare “The Biggest Celebration in Bitcoin” with live coverage and meetup events in 7 countries (and counting), and the halving isn’t even expected for another month. It’s very possible that 2024 will be remembered as the year that Bitcoin truly became enmeshed in the global financial infrastructure, if the stunning regulatory victories of January turn into unprecedented growth by December. Really, the major concern is whether Bitcoin will see diminished usage as a currency when its worth in fiat is so high. Nevertheless, the signs right now seem quite clear: Bitcoin is set to blaze a trail into the future.

MicroStrategy’s Saylor Offered to Buy Out Shareholders Before Buying Bitcoin

Michael Saylor faced obstacles before he successfully added Bitcoin to MicroStrategy’s balance sheet in 2020. 

Speaking at the 2024 Abundance360 Summit this week, Saylor revealed how he offered to buy out MicroStrategy shareholders in a Dutch auction when the company first moved to acquire $250 million worth of Bitcoin.

The comments come after Saylor and his company announced earlier this week that they had purchased approximately 9,245 bitcoins for around $623 million. This latest acquisition brings MicroStrategy’s total holdings to about 214,246 BTC.

In a video clip from the conference, Saylor explained how he offered MicroStrategy shareholders the option to tender their shares back to the company as they were buying Bitcoin.

“We announced that we would do a Dutch auction and buy back $250 million of the stock at a premium. The stock was about $121-$122. We offered to buy our shareholders out at $140. We gave 20 days to think about it,” Saylor explained.

He said his philosophy was to “buy Bitcoin” and that he was willing to do whatever it took to accumulate more. At the time, the price of Bitcoin was roughly $11,000, down from the all-time high of $20,000 set in 2017.

Saylor’s aggressiveness in acquiring Bitcoin for MicroStrategy has made the enterprise software company one of the largest corporate holders of Bitcoin. The company currently holds over $13.7 billion worth of BTC, making Bitcoin its primary corporate treasury asset.

MicroStrategy has since financed its bitcoin purchases through debt offerings and equity issuances, even through bitcoin market declines, and Saylor has made clear he has no plans to sell his stash anytime soon.

KYC, Bitcoin, and the failed hopes of AML policies: Preserving individual freedom

For the past decade, the abbreviations AML and KYC have become an inextricable part of our lives. To help law enforcement track illegal funds, an increasingly constraining set of anti-money-laundering (AML) measures is being implemented across the globe. For two decades now, these have included extensive know-your-customer (KYC) obligations for financial institutions, which are forced to check their clients’ identities, backgrounds, and the nature of their activities. This system, based on surveillance and the presumption of guilt, has helped the global financial system efficiently fight criminals by cutting off their money flows.

Or has it really?

Real-life numbers tell a different story. Several independent studies have found that AML and KYC policies enable the authorities to recover less than 0.1% of criminal funds. AML efforts cost a hundred times these amounts, but more importantly, they start to threaten our basic right to privacy.

Instances of absurd demands, like that of a French man asked to justify the origin of the €0.66 he wanted to deposit, hardly raise any eyebrows anymore. Regulators face this ridicule without blinking, all while journalists and whistleblowers continue to expose billions of dollars laundered at the highest levels of the same institutions that put their regular clients through a bureaucratic nightmare.

This suggests that sacrificing our right to privacy may not be justified by the results.

The blockchain emerging as a free value-transferring system, as opposed to the KYC-gated fiat, has given hope to many personal freedom advocates. However, the regulators’ response was to try and integrate both the acts of buying and transferring crypto into the current AML processes.

Does it mean that the blockchain has been tamed, with both the entrance and the exit sealed by the AML regulation?

Luckily, not yet. Or at least, not in every jurisdiction. For example, Switzerland, famous for its practical common sense, often allows companies to define their own risk exposure. This means that people can buy reasonable amounts of crypto without KYC.

The Swiss example could prove valuable in stopping global AML practices from spiralling out of control and bringing a surveillance state upon the world that used to be known as “free”. It is worth taking a closer look at, but first, let’s see why the traditional AML approach is failing.

KYC: the worst policy ever

Few people dare to question the effectiveness of the current AML-KYC policies: no one wants to appear on the “criminal” side of the debate. However, this debate is worth having, for our societies appear to be spending an indecent amount of money and effort on something that just does not work as intended.

As noted by the director of Europol Rob Wainwright in 2018: “The banks are spending $20 billion a year to run the compliance regime … and we are seizing 1 percent of criminal assets every year in Europe.”

This thought was developed in one of the most comprehensive studies on the effectiveness of AML, published in 2020 by Ronald Pol from La Trobe University in Melbourne. It found that “the anti-money laundering policy intervention has less than 0.1 percent impact on criminal finances, compliance costs exceed recovered criminal funds more than a hundred times over, and banks, taxpayers and ordinary citizens are penalized more than criminal enterprises.” Furthermore, “blaming banks for not ‘properly’ implementing anti-money laundering laws is a convenient fiction. Fundamental problems may lie instead with the design of the core policy prescription itself.”

The study uses numerous sources from major countries and agencies, but its author admits it is nearly impossible to reconcile it all. Indeed, as strange as it may seem, despite billions of dollars and euros spent on AML, there is no generalized practice that could allow us to measure its effectiveness.

The reality, however, is difficult to ignore. Despite 20 years of modern KYC practices, organized crime and drug use continue to rise. What’s more, a number of high-profile investigations have revealed massive money laundering schemes at the very top of respected financial institutions: Credit Suisse helping Bulgarian drug dealers, Wells Fargo (Wachovia) laundering money for Mexican cartels, BNP Paribas facilitating the operations of a Gabonese dictator… This is not to mention tax frauds initiated by the banks themselves: Danske Bank, Deutsche Bank, HSBC, and many others have been proven guilty of scamming their countries. Yet the regulators’ response was to tighten the rules around small retail-sized transfers and create extensive red tape for average law-abiding citizens.

Why would they choose such cumbersome and inefficient measures? Perhaps the main reason here is that the organizations that define the rules are not responsible for either implementing them or for the end result. This lack of accountability could explain the increasingly absurd rules forcing financial institutions to maintain armies of compliance specialists, and regular people to jump through hoops to perform basic financial operations.

This reality is not simply frustrating; in a broader historical and political context, it reveals worrisome trends. Increasingly intrusive regulations have set up a framework that allows people to be efficiently filtered. Under the pretext of fighting terrorism, different groups can be cut off from the financial system: politically exposed people, dissenting voices, the homeless, non-conformists… or those involved in the crypto space.

Crypto AML

The blockchain represents a major challenge for the fiat system because of its decentralized nature. Unlike centralized banks burdened with countless AML-related verifications, blockchain nodes simply run user-agnostic code.

There is no way a blockchain like Bitcoin could be shaped into the AML mold; however, the intermediaries, also known as VASPs (virtual asset service providers), can be. Their AML duties now cover two major categories: buying crypto and transferring crypto.

Transferring crypto falls under the prerogative of the FATF, and most countries tend to implement this organization’s recommendations sooner or later. These recommendations include the “travel rule”, which implies that data about the funds must “travel” together with them. Currently, the FATF recommends that any transfer over $1,000 be accompanied by information on the sender and the beneficiary.

Different countries impose different thresholds for the travel rule, with $3,000 in the US, €1,000 in Germany, and €0 in France and Switzerland. The upcoming TFR regulation update will impose the mandatory KYC for every crypto transfer starting from €0 in all EU countries.

The good thing about blockchain, though, is that it does not need intermediaries for transferring value. However, it needs them for buying crypto with fiat.

The framework for buying crypto is determined by financial regulators and central banks, and this is where the countries’ traditions play an important role. In France, a highly centralized country, an array of minute regulations, on-site inspections, and conferences define market practices in great detail. Switzerland, a decentralized country famous for its direct democracy based on consensus, typically grants financial intermediaries a certain autonomy in managing their own risk appetite.

Switzerland is also the country where one of the most prominent liberal economists Friedrich Hayek founded the famous Mont Pelerin Society. Even back in 1947, its members were worried about dangers to individual liberty, noting that “Even that most precious possession of Western Man, freedom of thought and expression, is threatened by the spread of creeds which, claiming the privilege of tolerance when in the position of a minority, seek only to establish a position of power in which they can suppress and obliterate all views but their own.”

Interestingly, a company called Mt Pelerin operates today on the shores of Lake Geneva, and this company is a crypto broker.

Buying crypto in Switzerland

Switzerland is far from the libertarian tax haven that many believe it to be. It has succumbed to international pressure, de facto canceling its centuries-old banking secrecy tradition for foreign residents. It is now a member of the OECD treaty on the automatic exchange of information, and the zeal with which it applies FATF recommendations shows a willingness to shake off its formerly sulfurous image. Indeed, FINMA decided to implement the travel rule for crypto starting from €0, including for unhosted wallets, as early as 2017. In contrast, the “conservative” European Union will enforce this obligation only in 2024.

However, when the funds don’t explicitly leave the country, Switzerland still prefers to not micromanage its financial institutions and does not impose tons of paperwork for routine operations. It now stands as one of the rare countries on the old continent where people can buy crypto without being profiled. This means that companies like Mt Pelerin can process retail-size crypto transactions of CHF 1,000 per day without requiring the client to verify their identity.

This does not mean an open bar, but rather a higher degree of autonomy. For example, Mt Pelerin implements its own fraud detection methods and reserves the right to refuse transactions that raise suspicion. In contrast to the heavily bureaucratic procedures that other countries impose, this approach actually boasts a high success rate at filtering out fraudulent transaction attempts. After all, the firms operating on the front lines often have a better understanding of the ever-evolving fraud tactics than government officials.

For the sake of our societies, the Swiss approach to AML must be preserved and replicated. In a time when mass surveillance has become routine, and the CBDC development threatens to impose total control over our personal finances, we are closer than ever to the dystopia that Friedrich Hayek feared so much.

By controlling our day-to-day transactions, any government, even the best-intentioned, could manipulate our lives and effectively “obliterate any views but their own”. That’s why we buy Bitcoin, and that’s why we want to do so without KYC.

What about the criminals, you might ask? Shouldn’t we cut off their access to money to curb their interest in underground entrepreneurship?

Admittedly, after 20 years of modern AML, this thesis has proven itself wrong. So why not accept the fact that criminals enter our money flows and just follow that money to expose their operations? Continue reading Part 2 to learn more.

A special thank you to Biba Homsy, the Regulatory & Crypto Lawyer at Homsy Legal, and the team of Mt Pelerin for sharing their insights. 

This is a guest post by Marie Poteriaieva. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.