Block & Order Weekly Docket (April 13, 2026)
The Gavel Drop
By: Moish Peltz
- 🏛️ The 3rd Circuit shields Kalshi from state gambling enforcement—prediction markets just became federally untouchable
- ⚖️ xAI sues Colorado over HB 1468, arguing AI regulations violate the First Amendment—the opening salvo in tech’s constitutional assault on state AI laws
- 💰 FDIC drops a 191-page stablecoin rulebook implementing GENIUS
- 📰 Yuga Labs quietly settles with Ryder Ripps after three years of NFT trademark warfare
🎙️ THIS WEEK ON BLOCK & ORDER
Brandon Karpeles, CEO of Sovreign, sits down with Block & Order for an unflinching conversation about what the industry still gets wrong about Bitcoin custody, trust, and failure.
🏛️ Crypto Policy & Regulation
- 3rd Circuit Shields Kalshi from New Jersey Gambling Enforcement. The U.S. Court of Appeals for the Third Circuit upheld a preliminary injunction barring New Jersey from enforcing its state gambling laws against Kalshi, the CFTC-regulated prediction markets exchange. The ruling crystallizes Kalshi’s (and the CFTC’s) position: prediction markets operating under CFTC jurisdiction now enjoy de facto federal preemption from state gambling enforcement. Kalshi Third Circuit Opinion
- FDIC Proposes GENIUS Act Rulebook. The FDIC Board approved a Notice of Proposed Rulemaking implementing the GENIUS Act’s requirements for stablecoin issuers. The 191-page proposal establishes prudential frameworks for “permitted payment stablecoin issuers” (PPSIs)—including reserve asset requirements, redemption mechanisms, capital standards, and risk management obligations. Comments are due by June 9, 2026. The proposal covers both stablecoin issuers that are bank subsidiaries and standalone issuers authorized under the Act. It also clarifies treatment of tokenized deposits. FDIC Notice of Proposed Rulemaking
- SEC Crypto Safe Harbor Heads to White House Review. SEC Chair Paul Atkins announced that the Commission’s crypto safe harbor proposal is heading to White House regulatory review. The proposal—designed to give crypto projects a compliance pathway during token distribution—represents the most significant shift in SEC crypto enforcement posture since Gensler’s departure. The safe harbor would provide issuers with a defined period to achieve “sufficient decentralization” before securities laws kick in. This is exactly what the industry has demanded for years: clarity on when a token stops being a security. Whether the White House signs off will determine whether the U.S. crypto regulatory framework actually becomes workable. The Block
- WLFI Threatens Justin Sun with Litigation Over Blacklist Backdoor Allegations. World Liberty Financial (WLFI), the Trump family-backed DeFi protocol, is threatening legal action against Justin Sun after Sun alleged the WLFI token contract contains a hidden “blacklist backdoor” that could freeze user tokens. WLFI called the claims defamatory and told Sun to “see you in court.” The technical dispute centers on whether the token contract’s admin functions constitute a “backdoor” or just normal smart contract governance capabilities. Sun’s team claims they discovered the functionality during security review; WLFI says Sun is engaging in competitive sabotage. But I thought code is law? The Block
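To see why the "backdoor vs. governance" framing is so contested, consider how little mechanical daylight there is between the two. The sketch below is a hypothetical illustration in Python, not WLFI's actual contract: the same admin-gated freeze primitive that a compliance team would call a standard blacklist control is what a critic would call a unilateral freeze backdoor.

```python
# Hypothetical sketch of an ERC-20-style token with an admin blacklist.
# Illustrative only -- this is NOT the WLFI contract or any real deployment.

class Token:
    def __init__(self, admin: str):
        self.admin = admin
        self.balances: dict[str, int] = {}
        self.blacklist: set[str] = set()

    def mint(self, to: str, amount: int) -> None:
        self.balances[to] = self.balances.get(to, 0) + amount

    def set_blacklisted(self, caller: str, target: str, frozen: bool) -> None:
        # Admin-only toggle: the identical primitive backs sanctions
        # compliance and, in a critic's framing, a freeze "backdoor".
        if caller != self.admin:
            raise PermissionError("only admin may modify the blacklist")
        if frozen:
            self.blacklist.add(target)
        else:
            self.blacklist.discard(target)

    def transfer(self, sender: str, to: str, amount: int) -> None:
        # A blacklisted address can neither send nor receive.
        if sender in self.blacklist or to in self.blacklist:
            raise RuntimeError("address frozen")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
```

Whether such a function is disclosed in documentation and audits, and who controls the admin key, is doing most of the legal work; the code itself is ambiguous between the two characterizations.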
- Yuga Labs Settles NFT Trademark War with Ryder Ripps. After more than three years of litigation (including multiple trips to the Ninth Circuit), Yuga Labs has settled its trademark infringement lawsuit against Ryder Ripps and Jeremy Cahen. The settlement terms weren’t disclosed, but the case established important precedent before ending: the district court ruled that NFT trademark claims are cognizable and that artistic appropriation doesn’t automatically shield commercial exploitation. What Yuga actually “won” is unclear. The litigation cost millions, dragged on for years, and the precedential value primarily benefits other NFT projects facing copycat collections (to the extent they can afford to litigate). Case: Yuga Labs, Inc. v. Ripps, 2:22-cv-04355 (C.D. Cal.)
- Blockchain Association Urges SEC Not to Let Incumbents “Slow the Future” of Tokenized Markets. The Blockchain Association filed comments with the SEC arguing that legacy financial institutions shouldn’t be allowed to use regulatory processes to delay tokenization of securities. The letter specifically calls out efforts by incumbent market participants to create compliance barriers that new entrants cannot practically meet. Blockchain Association SEC Comment
⚖️ AI Law & Liability
- xAI Sues Colorado, Arguing AI Law Violates First Amendment. xAI filed suit in the District of Colorado challenging the state’s AI Act (formerly SB 24-205, now HB 1468) on First Amendment grounds. The lawsuit argues that Colorado’s requirements for AI developers—including algorithmic impact assessments and mandatory disclosures—constitute unconstitutional speech compulsions. This is the tech industry’s opening constitutional salvo against the wave of state AI regulation. Colorado’s law, among the most comprehensive in the nation, requires “high-risk” AI deployers to assess and disclose potential harms from their systems. xAI argues this compels speech about subjective “harms” determinations that the government has no business mandating. If xAI prevails, the constitutional reasoning would extend to virtually every state AI law currently in force or under consideration. California, New York, Illinois—any state that requires AI disclosures, impact assessments, or explanations of algorithmic decision-making would face similar challenges. Case: X.AI LLC v. Weiser, 1:26-cv-01515 (D. Colo.) | Complaint
- AI Coding Boom on Collision Course with Copyright. The AI coding gold rush is heading toward a copyright reckoning. As AI-generated code proliferates across major software projects, the question of who owns output trained on copyrighted code—and who’s liable when AI reproduces protected snippets—is becoming unavoidable. Several major tech companies have implemented “code provenance” systems attempting to track whether AI-generated code derives from copyrighted training data. Others are indemnifying users against copyright claims. But the fundamental legal question remains unresolved: is code generated by AI trained on copyrighted repositories fair use, derivative work, or something else entirely? Law360
- Anthropic’s Mythos Model Can Find Zero-Days in Every Major OS—and That’s a Legal Problem. Anthropic’s security team this week published a technical preview of Claude Mythos, an AI model that can autonomously identify and exploit zero-day vulnerabilities across every major operating system and web browser. This raises novel liability and governance questions. If reverse engineering a binary goes from a months-long expert project to a cheap overnight AI run, what happens to trade secrets in software? Anthropic Red Team
📚 Sunday Night Readers
- Tyler Cowen’s Dialogue with Jonathan Zittrain. Tyler Cowen’s latest dialogue features Jonathan Zittrain, the Bemis Professor of International Law at Harvard and co-founder of the Berkman Klein Center. Their conversation ranges across AI governance, content moderation theory, the future of internet regulation, and why the “marketplace of ideas” metaphor has broken down. Zittrain’s central argument: the internet we built assumed good-faith participation at scale. That assumption failed. The question now isn’t how to restore the old internet but how to build governance structures for the one we have. Marginal Revolution
- Dean Ball on Anthropic’s Mythos Model and AI Governance. Dean Ball’s thread analyzing Anthropic’s new “Mythos” model raises important questions about AI governance at scale. Ball notes that Mythos represents a shift in Anthropic’s public positioning—from careful, safety-first messaging to more aggressive capability demonstrations. The thread explores the tension between AI safety rhetoric and competitive pressure to ship. When safety-focused labs start emphasizing capabilities, what does that signal about the industry’s actual incentive structures? @deanwball thread on X
- AI & Human Legal Reasoning: New SSRN Paper Examines the Gap. A new paper on SSRN examines the fundamental differences between artificial intelligence and human legal reasoning, arguing that current AI systems mimic the outputs of legal analysis without replicating the underlying cognitive processes. The paper distinguishes between “reasoning” (the process) and “rationalization” (post-hoc explanation), suggesting that current LLMs excel at the latter but lack the former. For legal AI applications, this distinction matters: a system that produces correct-looking legal analysis without understanding why it’s correct will fail in novel situations. Artificial Intelligence and Human Legal Reasoning, SSRN
- Adam Back Denies He’s Satoshi Nakamoto (Again). The New York Times published an investigation suggesting cypherpunk pioneer Adam Back may be Bitcoin’s pseudonymous creator. Back denied the claim, as he has before. The Satoshi mystery continues to generate more heat than light. NYT
Throughline
This week crystallized a fundamental tension in tech law: the collision between state regulatory ambition and federal preemption. In crypto, the Third Circuit told New Jersey that CFTC-regulated prediction markets are federally protected—states can’t enforce gambling laws against platforms operating under federal licenses. In AI, xAI is betting that the First Amendment does the same work: state-mandated AI disclosures are compelled speech that federal constitutional protections bar.

The pattern is consistent. As states rush to regulate emerging technology—whether stablecoins, prediction markets, or AI systems—federal authority keeps reasserting itself. Sometimes through explicit preemption (GENIUS Act), sometimes through constitutional challenge (xAI), sometimes through regulatory interpretation (Kalshi). The result is the same: national rules trump local experimentation.

Whether that’s good depends on your view of federalism. The crypto industry overwhelmingly prefers one federal regulator to 50 state ones. AI companies are clearly heading the same direction. But there’s a cost: when states can’t experiment, we lose the “laboratory of democracy” benefits that state-level regulation historically provided.
Block & Order Weekly Docket | Week of April 6 – April 12, 2026
For legal professionals navigating crypto, AI, and emerging technology law. The materials in this article are for informational purposes only and are not legal advice. Do not act upon this information without first seeking advice from an attorney licensed in your jurisdiction.

