
    Open-source AI isn’t the end-all game—Bringing AI onchain is

By James Wilson | May 6, 2025 | 5 min read


Disclosure: The views and opinions expressed here belong solely to the author and do not represent those of crypto.news' editorial team.

In January 2025, DeepSeek's R1 surpassed ChatGPT as the most downloaded free app on the US Apple App Store. Unlike proprietary models such as ChatGPT, DeepSeek is open-source: anyone can access the code, study it, share it, and use it for their own models.

This shift has fueled excitement about transparency in AI, pushing the industry toward greater openness. Weeks later, in February 2025, Anthropic released Claude 3.7 Sonnet, a hybrid reasoning model that is partially open for research previews, further amplifying the conversation around accessible AI.

Yet, while these developments drive innovation, they also expose a dangerous misconception: that open-source AI is inherently more secure (and safer) than closed models.

    The promise and the pitfalls

    Open-source AI models like DeepSeek’s R1 and Replit’s latest coding agents show us the power of accessible technology. DeepSeek claims it built its system for just $5.6 million, nearly one-tenth the cost of Meta’s Llama model. Meanwhile, Replit’s Agent, supercharged by Claude 3.5 Sonnet, lets anyone, even non-coders, build software from natural language prompts.

The implications are huge: smaller companies, startups, and independent developers can now build new specialized AI applications, including new AI agents, on top of these existing (and very robust) models at lower cost, at a faster rate, and with far less friction. This could create a new AI economy where accessibility to models is king.

    But where open-source shines—accessibility—it also faces heightened scrutiny. Free access, as seen with DeepSeek’s $5.6 million model, democratizes innovation but opens the door to cyber risks. Malicious actors could tweak these models to craft malware or exploit vulnerabilities faster than patches emerge.

Open-source AI doesn’t lack safeguards by default. It builds on a legacy of transparency that has fortified technology for decades. Historically, engineers leaned on “security through obscurity,” hiding system details behind proprietary walls. That approach faltered: vulnerabilities surfaced, often discovered first by bad actors. Open-source flipped this model, exposing code—like DeepSeek’s R1 or Replit’s Agent—to public scrutiny, fostering resilience through collaboration. Yet, neither open nor closed AI models inherently guarantee robust verification.

    The ethical stakes are just as critical. Open-source AI, much like its closed counterparts, can mirror biases or produce harmful outputs rooted in training data. This isn’t a flaw unique to openness; it’s a challenge of accountability. Transparency alone doesn’t erase these risks, nor does it fully prevent misuse. The difference lies in how open-source invites collective oversight, a strength that proprietary models often lack, though it still demands mechanisms to ensure integrity.

    The need for verifiable AI

    For open-source AI to be more trusted, it needs verification. Without it, both open and closed models can be altered or misused, amplifying misinformation or skewing automated decisions that increasingly shape our world. It’s not enough for models to be accessible; they must also be auditable, tamper-proof, and accountable. 

By using distributed networks, blockchains can certify that AI models remain unaltered, that their training data stays transparent, and that their outputs can be validated against known baselines. Unlike centralized verification, which hinges on trusting one entity, blockchain’s decentralized, cryptographic approach stops bad actors from tampering behind closed doors. It also flips the script on third-party control, spreading oversight across a network and creating incentives for broader participation. Today, by contrast, unpaid contributors fuel trillion-token datasets without consent or reward, then pay to use the results.

    A blockchain-powered verification framework brings layers of security and transparency to open-source AI. Storing models onchain or via cryptographic fingerprints ensures modifications are tracked openly, letting developers and users confirm they’re using the intended version. 
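
To make the fingerprint idea concrete, here is a minimal Python sketch. It streams a checkpoint file through SHA-256 and compares the digest to a value that would, in a real deployment, be read from an onchain registry; the function names and the `onchain_fingerprint` parameter are illustrative assumptions, not any specific protocol’s API.

```python
import hashlib

def fingerprint_model(weights_path: str) -> str:
    """Stream-hash a model checkpoint so large files need not fit in memory."""
    digest = hashlib.sha256()
    with open(weights_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(weights_path: str, onchain_fingerprint: str) -> bool:
    """Compare a local checkpoint against the fingerprint recorded onchain.

    `onchain_fingerprint` stands in for a value fetched from a
    smart-contract registry; the chain-specific lookup is omitted here.
    """
    return fingerprint_model(weights_path) == onchain_fingerprint
```

Because the fingerprint covers the serialized weights byte for byte, even a single-bit modification to the model yields a completely different digest, which is what makes open tracking of modifications possible.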

Capturing training data origins on a blockchain helps show that models draw on quality, unbiased sources, cutting the risk of hidden biases or manipulated inputs. Cryptographic techniques can also validate outputs without exposing the personal data users share (often unprotected today), balancing privacy with trust as models grow more capable.
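
One common way to commit to training-data origins without publishing the data itself is a Merkle tree: only the root hash goes onchain, and the membership of any individual record can later be proven or audited. A minimal sketch, assuming SHA-256 and simple pairwise hashing; no particular chain or dataset format is implied.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise into a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Commit to a toy "dataset" of three records; only this root would go onchain.
records = [b"doc-1", b"doc-2", b"doc-3"]
print(merkle_root(records).hex())
```

Because the root is a fixed 32 bytes regardless of dataset size, the onchain footprint stays constant even for trillion-token corpora, while each contributor keeps a compact proof that their record was included.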

    Blockchain’s transparent, tamper-resistant nature offers the accountability open-source AI desperately needs. Where AI systems now thrive on user data with little protection, blockchain can reward contributors and safeguard their inputs. By weaving in cryptographic proofs and decentralized governance, we can build an AI ecosystem that’s open, secure, and less beholden to centralized giants.

    AI’s future is based on trust… onchain

    Open-source AI is an important piece of the puzzle, and the AI industry should work to achieve even more transparency—but being open-source is not the final destination.

    The future of AI and its relevance will be built on trust, not just accessibility. And trust can’t be open-sourced. It must be built, verified, and reinforced at every level of the AI stack. Our industry needs to focus its attention on the verification layer and the integration of safe AI. For now, bringing AI onchain and leveraging blockchain tech is our safest bet for building a more trustworthy future.

David Pinger

David Pinger is the co-founder and CEO of Warden Protocol, a company focused on bringing safe AI to web3. Before co-founding Warden, he led research and development at Qredo Labs, driving web3 innovations such as stateless chains, WebAssembly, and zero-knowledge proofs. Before Qredo, he held roles in product, data analytics, and operations at both Uber and Binance. David began his career as a financial analyst in venture capital and private equity, funding high-growth internet startups. He holds an MBA from Panthéon-Sorbonne University.


