Compare commits


67 Commits

Author SHA1 Message Date
Mosh
66bd44e4b6 Merge pull request #723 from 2color/patch-1
chore: make the ipfsspec mention a bit more specific and add canonical urls
2026-02-04 16:53:51 +01:00
Daniel Norman
59ea9a2b9c fix: add canonical urls 2026-01-30 12:47:17 +01:00
Daniel Norman
c4165edbed make the ipfsspec mention a bit more specific 2026-01-30 12:40:26 +01:00
Marcin Rataj
e22e2aed72 ci(deploy): add concurrency to prevent duplicate artifacts
prevents multiple deploy workflows from running concurrently for the
same branch, which caused "multiple github-pages artifacts" errors
when builds completed close together.
2026-01-22 15:23:16 +01:00
Robin Berjon
b96fb2fcff Merge pull request #720 from darobin/2025-review
Year in review 2025
2026-01-22 14:59:34 +01:00
Robin Berjon
b894276863 Merge branch 'main' into 2025-review 2026-01-22 14:55:44 +01:00
Robin Berjon
ecf64efd1b rename pic 2026-01-22 14:40:30 +01:00
Robin Berjon
09e9dd871d Update src/_blog/2026-01-year-in-review.md
Co-authored-by: Mosh <1306020+mishmosh@users.noreply.github.com>
2026-01-22 08:31:51 -05:00
Robin Berjon
2d2d505e7c Update src/_blog/2026-01-year-in-review.md
Co-authored-by: Mosh <1306020+mishmosh@users.noreply.github.com>
2026-01-22 08:31:25 -05:00
Robin Berjon
b011df8875 Update src/_blog/2026-01-year-in-review.md 2026-01-22 08:29:22 -05:00
Robin Berjon
f08e08edb0 Update src/_blog/2026-01-year-in-review.md
Co-authored-by: Mosh <1306020+mishmosh@users.noreply.github.com>
2026-01-22 08:28:06 -05:00
Marcin Rataj
1dd2cccca8 chore: add Shipyard fleek migration blog post (#722)
* chore: add Shipyard fleek migration blog post

link to ipshipyard.com/blog/2026-ipfs-self-hosting-migration/

* chore: inline Shipyard fleek migration as full blog post

cross-posted from ipshipyard.com instead of linking out,
per review feedback on PR #722

* Apply suggestions from Mosh

Co-authored-by: Mosh <1306020+mishmosh@users.noreply.github.com>

* chore: add canonicalUrl to Shipyard cross-posts


---------

Co-authored-by: Mosh <1306020+mishmosh@users.noreply.github.com>
2026-01-21 23:10:25 +01:00
Mosh
3bf1044d39 Merge pull request #721 from ipfs/new-post/ipld-2025
New post: IPLD 2025 In Review
2026-01-21 11:43:27 -05:00
Mosh
a3d9de839e Update 2026-01-ipld-2025-review.md 2026-01-21 11:35:19 -05:00
Mosh
60268d3971 Update src/_blog/2026-01-ipld-2025-review.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2026-01-21 11:30:11 -05:00
Mosh
10d6cfb5e0 Change date 2026-01-20 11:45:39 -05:00
Mosh
729c99394e Add header image 2026-01-16 16:58:22 -05:00
Mosh
b3fda7cc07 Create 2026-01-ipld-2025-review.md 2026-01-16 16:56:24 -05:00
Robin Berjon
254da08e9f year in review 2025 2026-01-15 16:51:32 +01:00
Marcin Rataj
3d0569bfc5 chore(ci): add dnslink-action, cluster, and GitHub Pages
- add DNSLink update via ipshipyard/dnslink-action@v1
- add IPFS cluster deployment with 90-day expiry for non-main branches
- add GitHub Pages as HTTPS fallback
- Fleek hosting shuts down Jan 31, 2026

Related: https://github.com/ipshipyard/waterworks-community/issues/23
2026-01-05 22:06:16 +01:00
Marcin Rataj
0496efe55b chore: add Shipyard 2025 IPFS year in review
- add ecosystem content entry linking to ipshipyard.com blog post
- support description field in RSS feed entries
2025-12-19 23:13:15 +01:00
Marcin Rataj
64861f42fc chore: include release notes and ecosystem content in RSS feed
add postbuild script that enhances the VuePress-generated RSS feed
with items from special content pages (release notes, ecosystem,
news coverage, videos, tutorials, events) that are stored as YAML
arrays in frontmatter rather than individual markdown files.

only items published after 2025-11-25 are included to avoid
backfilling old content into subscribers' feeds.
2025-11-27 06:44:44 +01:00
Marcin Rataj
398b5cb18a chore: kubo 0.39.0 and provide sweep blog post 2025-11-27 05:59:59 +01:00
Marcin Rataj
f2ad344944 chore: kubo 0.38.2 2025-10-30 04:44:38 +01:00
Marcin Rataj
320be76513 chore: kubo 0.38.1 2025-10-09 03:37:43 +02:00
Marcin Rataj
a2f5adcdb9 chore: kubo 0.38.0 2025-10-02 21:28:00 +02:00
Daniel Norman
cd43d8bf9c add someguy cached router blog post (#718)
* delegated routing cached router

* update permalink

* add diagram

* edits

* more edits and refinement

* edit

* Apply suggestions from code review

Co-authored-by: Marcin Rataj <lidel@lidel.org>

* Apply suggestions from code review

Co-authored-by: Marcin Rataj <lidel@lidel.org>

* Apply suggestions from code review

Co-authored-by: Marcin Rataj <lidel@lidel.org>

* Apply suggestions from code review

Co-authored-by: Marcin Rataj <lidel@lidel.org>

* chore: rename blog post

* chore: update publish date

* Apply suggestions from code review

Co-authored-by: Marcin Rataj <lidel@lidel.org>

* chore: update publish date

* final edits

* fix typos

---------

Co-authored-by: Daniel N <2color@users.noreply.github.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2025-09-05 12:50:21 +02:00
Daniel Norman
4102890b59 fix(ci): improve deployment by splitting into two workflows (#719)
Co-authored-by: Daniel N <2color@users.noreply.github.com>
2025-09-03 17:02:42 +02:00
Marcin Rataj
32040d1e90 chore: kubo 0.37.0 2025-08-27 22:47:38 +02:00
Robin Berjon
9fe361d557 Merge pull request #717 from darobin/ed25519
Ed25519 blog
2025-08-14 05:03:13 -04:00
Robin Berjon
a5039d53ae ed25519 bliog 2025-08-13 18:06:21 +02:00
Marcin Rataj
932cc2b9ae chore: link to original post 2025-08-05 21:56:34 +02:00
Daniel N
2a1a20227f chore: update canonical link 2025-08-04 15:54:13 +02:00
Daniel Norman
43720fe540 add js-libp2p devtools blog post (#716)
* feat: add dev tools blog post

* edits

* add header image

* more edits

* more edits

* Optimised images with calibre/image-actions

* final edit

---------

Co-authored-by: Daniel N <2color@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-25 13:33:13 +02:00
Marcin Rataj
fa6109bb57 chore: kubo 0.36.0 2025-07-14 23:06:26 +02:00
Marcin Rataj
12703fa248 fix: typo in newsletter-205.md 2025-05-26 14:36:31 +02:00
Daniel Norman
4220a5e7b5 publish newsletter issue 205 (#715)
Co-authored-by: Daniel N <2color@users.noreply.github.com>
2025-05-26 11:30:52 +02:00
Daniel N
22d008625d fix: url 2025-05-23 15:18:20 +02:00
Daniel N
2698a5bdab add latest youtube video 2025-05-23 13:48:38 +02:00
Marcin Rataj
c635eec6f8 chore: update releasenotes.md 2025-05-21 20:16:42 +02:00
Mosh
9c25aa58ae Merge pull request #714 from darobin/patch-1
Update to match white space processing settings
2025-05-15 22:29:03 +08:00
Robin Berjon
bc735e20e2 Update to match white space processing settings 2025-05-15 09:45:42 -04:00
Mosh
4d4655bf80 Merge pull request #713 from darobin/grants-announcement-2025
Spring 2025 utility grantees
2025-05-13 23:35:42 +08:00
Mosh
739affd649 Add additional team member names 2025-05-13 11:23:52 -04:00
Robin Berjon
0a45642c21 Cole 2025-05-13 14:06:54 +02:00
Robin Berjon
f4eb528df1 more idnexing 2025-05-12 16:19:34 +02:00
Robin Berjon
6f0977dbeb Update src/_blog/2025-05-grants.md 2025-05-12 10:18:08 -04:00
Robin Berjon
d154153730 Update src/_blog/2025-05-grants.md 2025-05-12 10:18:00 -04:00
Robin Berjon
26f2bbf2d8 Update src/_blog/2025-05-grants.md 2025-05-12 10:17:53 -04:00
Robin Berjon
c568922524 Update src/_blog/2025-05-grants.md
Co-authored-by: Bumblefudge <bumblefudge@learningproof.xyz>
2025-05-12 05:40:35 -04:00
Robin Berjon
80dc5bfbce Update src/_blog/2025-05-grants.md
Co-authored-by: Bumblefudge <bumblefudge@learningproof.xyz>
2025-05-12 05:40:21 -04:00
Robin Berjon
9777c23e53 Update src/_blog/2025-05-grants.md
Co-authored-by: Bumblefudge <bumblefudge@learningproof.xyz>
2025-05-12 05:39:57 -04:00
Robin Berjon
fb33e12c92 draft blog announcement 2025-05-12 10:03:32 +02:00
web3-bot
3a425661e5 ci: uci/copy-templates (#712)
* chore: add or force update .github/workflows/stale.yml

* chore: add or force update .github/workflows/generated-pr.yml
2025-05-01 09:37:19 +02:00
Daniel N
c53620b939 fix: publish date 2025-04-04 13:39:37 +02:00
Daniel N
81fb0f2330 add video to ipfs blog 2025-04-04 13:23:08 +02:00
Marcin Rataj
e825447b17 chore: fix typo 2025-03-25 23:55:55 +01:00
Marcin Rataj
47031049b7 kubo v0.34.1 2025-03-25 23:55:26 +01:00
Marcin Rataj
ca0bc47bc8 chore: kubo 0.34 2025-03-20 23:39:56 +01:00
Daniel N
acdd229661 fix: title case 2025-03-07 12:04:12 +01:00
Daniel N
8194eaff7f ci: bump to v1 release 2025-03-06 17:59:59 +01:00
Daniel N
52318151e8 ci: run on pull_request_target only 2025-03-06 17:59:29 +01:00
Justin Hunter
cfc9fea66d Static web manifesto blog post (#707)
* Static web manifesto blog post

* made suggested changes to static website manifesto post

* Update src/_blog/2025-02-static-web-manifesto

Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>

* Update src/_blog/2025-02-static-web-manifesto

Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>

* Update src/_blog/2025-02-static-web-manifesto

Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>

* added header image

* fixed image folder location

* chore: use custom version of action

* bump action version

* fix: action version

* fix: name of blog post file

---------

Co-authored-by: Justin Hunter <justin@pinata.cloud>
Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>
Co-authored-by: Daniel N <2color@users.noreply.github.com>
2025-03-06 17:47:11 +01:00
Daniel N
26ff40b506 bump deploy action version 2025-03-06 17:15:30 +01:00
Daniel N
b52cbb5604 chore: use custom version of deploy action 2025-03-06 17:11:02 +01:00
Daniel Norman
ed82724cdd ci: run workflow with secrets for forks too (#711)
Co-authored-by: Daniel N <2color@users.noreply.github.com>
2025-03-06 13:42:35 +01:00
Marcin Rataj
4ad47dd1b8 chore: add unit 2025-03-02 20:31:58 +01:00
37 changed files with 1220 additions and 48 deletions


@@ -1,41 +0,0 @@
name: Build and Deploy to IPFS
permissions:
  contents: read
  pull-requests: write
  statuses: write
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    outputs: # This exposes the CID output of the action to the rest of the workflow
      cid: ${{ steps.deploy.outputs.cid }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build project
        run: npm run build
      - uses: ipfs/ipfs-deploy-action@v1
        name: Deploy to IPFS
        id: deploy
        with:
          path-to-deploy: dist
          storacha-key: ${{ secrets.STORACHA_KEY }}
          storacha-proof: ${{ secrets.STORACHA_PROOF }}
          github-token: ${{ github.token }}

.github/workflows/build.yml

@@ -0,0 +1,57 @@
# Build workflow - runs for both PRs and main branch pushes
# This workflow builds the website without access to secrets
# For PRs: Runs on untrusted fork code safely (using pull_request event, not pull_request_target)
# For main: Builds and uploads artifacts for deployment
# Artifacts are passed to the deploy workflow which has access to secrets
name: Build
permissions:
  contents: read
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
env:
  BUILD_PATH: 'dist'
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          # - For PRs: PR head commit
          # - For pushes: the pushed commit
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci --prefer-offline --no-audit --progress=false
      - name: Build project
        run: npm run build
      # Upload artifact for deploy workflow
      - name: Upload build artifact
        uses: actions/upload-artifact@v4
        with:
          name: blog-build-${{ github.run_id }}
          path: ${{ env.BUILD_PATH }}
          retention-days: 1

.github/workflows/deploy.yml

@@ -0,0 +1,105 @@
# Deploy workflow - triggered by workflow_run after successful build
# This workflow has access to secrets but never executes untrusted code
# It only downloads and deploys pre-built artifacts from the build workflow
# Security: Fork code cannot access secrets as it only runs in build workflow
# Deploys to IPFS for all branches
name: Deploy
# Explicitly declare permissions
permissions:
  actions: read
  contents: read
  pull-requests: write
  statuses: write
on:
  workflow_run:
    workflows: ["Build"]
    types: [completed]
env:
  BUILD_PATH: 'blog-build'
# Prevent concurrent deployments to the same target
# This avoids the "multiple github-pages artifacts" error
concurrency:
  group: deploy-${{ github.event.workflow_run.head_branch }}
  cancel-in-progress: true
jobs:
  deploy-ipfs:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    outputs:
      cid: ${{ steps.deploy.outputs.cid }}
    environment:
      name: 'ipfs-publish'
    steps:
      - name: Download build artifact
        uses: actions/download-artifact@v4
        with:
          name: blog-build-${{ github.event.workflow_run.id }}
          path: ${{ env.BUILD_PATH }}
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ github.token }}
      - name: Deploy to IPFS
        uses: ipshipyard/ipfs-deploy-action@v1
        id: deploy
        with:
          path-to-deploy: ${{ env.BUILD_PATH }}
          cluster-url: "/dnsaddr/ipfs-websites.collab.ipfscluster.io"
          cluster-user: ${{ secrets.CLUSTER_USER }}
          cluster-password: ${{ secrets.CLUSTER_PASSWORD }}
          cluster-pin-expire-in: ${{ github.event.workflow_run.head_branch != 'main' && '2160h' || '' }}
          storacha-key: ${{ secrets.STORACHA_KEY }}
          storacha-proof: ${{ secrets.STORACHA_PROOF }}
          github-token: ${{ github.token }}
  dnslink-update:
    runs-on: ubuntu-latest
    needs: deploy-ipfs
    if: github.event.workflow_run.head_branch == 'main'
    environment:
      name: 'cf-dnslink'
      url: "https://blog-ipfs-tech.ipns.inbrowser.link/"
    steps:
      - name: Update DNSLink
        uses: ipshipyard/dnslink-action@v1
        with:
          cid: ${{ needs.deploy-ipfs.outputs.cid }}
          dnslink_domain: 'blog-ipfs-tech.dnslinks.ipshipyard.tech'
          cf_zone_id: ${{ secrets.CF_DNS_ZONE_ID }}
          cf_auth_token: ${{ secrets.CF_DNS_AUTH_TOKEN }}
          github_token: ${{ github.token }}
          set_github_status: true
  deploy-gh-pages:
    if: |
      github.event.workflow_run.conclusion == 'success' &&
      github.event.workflow_run.head_branch == 'main'
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - name: Download build artifact
        uses: actions/download-artifact@v4
        with:
          name: blog-build-${{ github.event.workflow_run.id }}
          path: blog-build
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ github.token }}
      - name: Upload Pages artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: blog-build
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

.github/workflows/generated-pr.yml

@@ -0,0 +1,14 @@
name: Close Generated PRs
on:
  schedule:
    - cron: '0 0 * * *'
  workflow_dispatch:
permissions:
  issues: write
  pull-requests: write
jobs:
  stale:
    uses: ipdxco/unified-github-workflows/.github/workflows/reusable-generated-pr.yml@v1


@@ -1,8 +1,9 @@
-name: Close and mark stale issue
+name: Close Stale Issues
 on:
   schedule:
     - cron: '0 0 * * *'
+  workflow_dispatch:
 permissions:
   issues: write
@@ -10,4 +11,4 @@ permissions:
 jobs:
   stale:
-    uses: pl-strflt/.github/.github/workflows/reusable-stale-issue.yml@v0.3
+    uses: ipdxco/unified-github-workflows/.github/workflows/reusable-stale-issue.yml@v1


@@ -0,0 +1,96 @@
'use strict'

/**
 * Enhances the RSS feed to include items from special content pages
 * (release notes, ecosystem content, etc.) that are stored as YAML
 * arrays in frontmatter rather than individual markdown files.
 */

const fs = require('fs')
const path = require('path')
const xml2js = require('xml2js')
const matter = require('gray-matter')
const dayjs = require('dayjs')

const xmlFilePath = 'dist/index.xml'

// Only include items published after this date (to avoid backfilling old content)
const CUTOFF_DATE = dayjs('2025-11-25')

// Content types to include in the unified feed
// Each item in the data array should have: title, date, path (URL)
const CONTENT_SOURCES = [
  { file: 'releasenotes.md', category: 'Release Notes' },
  { file: 'ecosystemcontent.md', category: 'Ecosystem' },
  { file: 'newscoverage.md', category: 'News Coverage' },
  { file: 'videos.md', category: 'Videos' },
  { file: 'tutorials.md', category: 'Tutorials' },
  { file: 'events.md', category: 'Events' },
]

function parseContentFile(filename) {
  const filepath = path.resolve('src/_blog', filename)
  try {
    const content = fs.readFileSync(filepath, 'utf8')
    const { data } = matter(content)
    const now = dayjs()
    return (data.data || []).filter((item) => {
      if (item.hidden) return false
      if (item.publish_date && dayjs(item.publish_date).isAfter(now)) return false
      return dayjs(item.publish_date || item.date).isAfter(CUTOFF_DATE)
    })
  } catch (err) {
    console.error(`Warning: Could not read ${filename}:`, err.message)
    return []
  }
}

function itemToRssEntry(item, category) {
  return {
    title: [item.title],
    link: [item.path],
    pubDate: [dayjs(item.date).toDate().toUTCString()],
    description: [item.description || item.title],
    category: [category],
    guid: [{ _: item.path, $: { isPermaLink: 'true' } }],
  }
}

async function enhanceFeed() {
  let xmlData, parsed
  try {
    xmlData = fs.readFileSync(xmlFilePath, 'utf8')
    parsed = await xml2js.parseStringPromise(xmlData)
  } catch (err) {
    console.error('Could not read/parse RSS feed:', err.message)
    process.exit(1)
  }
  // Blog posts link to the blog domain, special content links externally
  const blogDomain = parsed.rss.channel[0].link[0]
  const existingItems = (parsed.rss.channel[0].item || []).filter(
    (item) => item.link[0].startsWith(blogDomain)
  )
  // Parse special content and convert to RSS items
  const additionalItems = CONTENT_SOURCES.flatMap((source) =>
    parseContentFile(source.file).map((i) => itemToRssEntry(i, source.category))
  )
  // Deduplicate by guid
  const seen = new Set()
  const allItems = [...existingItems, ...additionalItems]
    .filter((item) => {
      const guid = item.guid?.[0]?._ || item.guid?.[0] || item.link[0]
      return !seen.has(guid) && seen.add(guid)
    })
    .sort((a, b) => new Date(b.pubDate[0]) - new Date(a.pubDate[0]))
  parsed.rss.channel[0].item = allItems
  const builder = new xml2js.Builder({ xmldec: { version: '1.0', encoding: 'UTF-8' } })
  fs.writeFileSync(xmlFilePath, builder.buildObject(parsed))
  console.log(`Enhanced RSS feed: ${allItems.length} items (${existingItems.length} posts + ${additionalItems.length} special)`)
}

exports.enhanceFeed = enhanceFeed
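For reference, the script above expects each source file to carry its items as a `data` array in YAML frontmatter, with the fields it filters on (`title`, `date`, `path`, and optionally `description`, `hidden`, and `publish_date`). A hypothetical `releasenotes.md` entry might look like this; the field names come from the script, but the values are purely illustrative:

```yaml
---
data:
  - title: 'Kubo 0.39.0'
    date: 2025-11-26
    path: 'https://github.com/ipfs/kubo/releases/tag/v0.39.0'
    description: 'Kubo 0.39.0 release notes'
    # optional fields the filter understands:
    # hidden: true         -> always excluded from the feed
    # publish_date: ...    -> excluded until this date has passed
---
```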


@@ -2,10 +2,15 @@
 'use strict'
+const { enhanceFeed } = require('./enhance-feed')
 const { generateIndexFile } = require('./latest-posts')
 const { generateNewsFile } = require('./latest-news')
 const { generateVideosFile } = require('./latest-videos')
-generateIndexFile()
-generateNewsFile()
-generateVideosFile()
+// Enhance RSS feed first (adds release notes, ecosystem content, etc.)
+// then generate index.json from the enhanced feed
+enhanceFeed().then(() => {
+  generateIndexFile()
+  generateNewsFile()
+  generateVideosFile()
+})


@@ -0,0 +1,57 @@
---
title: The Static Website Manifesto
description: "Building websites used to be fun. Let's bring back the joy of putting your work online by using open protocols like IPFS."
author: Justin Hunter
date: 2025-03-06
permalink: '/2025-03-static-web-manifesto/'
header_image: '/static-website-manifesto-header.png'
canonicalUrl: https://orbiter.host/blog/the-static-website-manifesto
tags:
- 'community'
---
_This article originally appeared on [Orbiter's blog](https://orbiter.host/blog/the-static-website-manifesto)_
Building websites used to be fun. Let's bring back the joy of putting your work online. This is our manifesto. Orbiter is built on IPFS using [IPCM](https://ipcm.dev) to map sites to Base smart contracts.
## Why IPFS
Websites are open and accessible for anyone. They are the perfect match for IPFS. IPFS is a peer-to-peer storage protocol that allows anyone to access a piece of content as long as they know the content identifier (CID) and there's an online node providing the data. This is a perfect match for the open web. It creates a level of availability not possible with traditional website file storage.
Additionally, because IPFS is content-addressed, each CID references a specific _version_ of a site. This creates a built-in versioning system, similar to what you get when you explore the Wayback Machine.
Orbiter uses IPFS behind the scenes and integrates these concepts seamlessly for people hosting websites on the platform. But beyond that, Orbiter uses these concepts to make the web fun again.
## Bringing Back the Fun of the Web
Remember when putting a website online was as simple as uploading some files? The web used to be a playground for everyone to explore. Sure, there were technical hurdles, whether on services like Geocities and Angelfire or when uploading HTML using FTP. While FTP had its problems, it offered a straightforward path to launching a site you owned and controlled.
![geocities](../assets/geocities_old_web.png)
## How We Lost Our Way
Today's web is increasingly locked behind walled gardens. Social media has replaced the individual blogs and personal websites that flourished a decade ago. While changing preferences play a role, convenience is the real culprit. It's far easier to post on Facebook or Twitter than to wrangle with modern web development tools, complex deployment processes, and the challenge of finding an audience.
## The Problem with Modern Web Development
Launching a website today isn't fun anymore. While we have powerful tools at our disposal with incredible capabilities, they come at the cost of simplicity. The basic web stack - HTML for structure, CSS for style, and a sprinkle of JavaScript for interactivity - still works perfectly well. Yet much of web development has shifted toward complex frameworks that can make building even simple sites feel like a chore.
Who wants to wait 80 seconds for their code to build into a deployable package? The modern expectation is that you'll use source control, host your code on GitHub, and implement a full deployment pipeline. These tools are valuable for many projects, but they're overkill for many websites. Sometimes, you just want to take your folder of HTML files, upload them, and call it a day.
## A Return to Simplicity
This philosophy doesn't just apply to personal websites. Static web applications could benefit from returning to the web's simpler roots. A static web app is just a website with more JavaScript - we don't need to complicate it further. While larger projects might benefit from source control and automated deployments, the process should be flexible. Want to use CI/CD and other modern tools? Great! Prefer to simply upload files? That works too.
## Introducing Orbiter
As server-side rendering gains popularity, we're taking a stand for static sites. Orbiter is here for:
* The people building websites that don't need servers
* The developers tired of waiting minutes for builds when they could have a working site in seconds
* The web designers writing their first HTML
* The marketers moving from WordPress to simple static HTML
* The developers creating their own static site generators
* Anyone who wants to share their work on the web
Don't feel like learning complex deployment processes? No problem. Orbiter gives you the freedom to work the way you want. We've built the simplest static site hosting platform on the web because we believe in making the web flexible, fun, and fast again.
Ready to make web hosting fun again? [Join us at Orbiter and put your website online in seconds](https://app.orbiter.host?ref=blog).


@@ -0,0 +1,38 @@
---
title: Spring 2025 IPFS Utility Grantees
description: "We're delighted to announce the grantees for the Spring 2025 round of IPFS Utility Grants."
author: Robin Berjon
date: 2025-05-12
permalink: '/2025-05-grants/'
header_image: '/utility-grants.png'
tags:
- grants
- funding
- ecosystem
---
The IPFS Implementations Grants program exists to advance the development, growth, and impact of the IPFS project through a focus on developer choice and availability. We provide financial support to projects and teams working to make IPFS accessible to more developer communities.
We recently ran [the Spring 2025 grant cycle for utilities](https://ipfsgrants.io/utility-grants/), which supports developers creating essential utilities, libraries, and tooling for the IPFS ecosystem. It was a tight competition with strong contenders and we're delighted with the grantees who came out of this round.
## rsky-satnav CAR Explorer from Rudy Fraser, BlackSky
If you're anywhere near work on the [AT Protocol](https://atproto.com/) then you surely know Rudy Fraser, among other things for his work on [BlackSky](https://www.blackskyweb.xyz/) and the [rsky](https://github.com/blacksky-algorithms/rsky) (say "risky") projects.
The grant will go to [rsky-satnav](https://github.com/blacksky-algorithms/rsky/tree/main/rsky-satnav) (Structured Archive Traversal, Navigation And Verification — we do appreciate a quality acronym), a local-first and user-friendly [CAR](https://dasl.ing/car.html) explorer for AT Protocol.
CAR archives are a very convenient part of the IPFS ecosystem, used to package up multiple CID-addressed resources in one bundle, and AT Protocol PDSs rely on them for data exports. But end users, even technical ones, have found dealing with CAR files challenging due to a lack of tooling. We really look forward to playing with rsky-satnav ourselves!
## CAR Indexing from Ben Lau, Basile Simon, and Yurko Jaremko, Starling Lab
Another issue with CAR files is that they are as diverse as the data use cases and ergonomics of the IPFS ecosystem: Filecoin uploading returns a CAR file, but it sidesteps UnixFS, so most CAR tooling cannot reconstruct or navigate its contents. Because these big-data archive files are not introspectable with UnixFS tools, the [Starling Lab](https://starlinglab.org/) team is open-sourcing some indexing tools they created internally that build a _private index_ of Filecoin uploads, rounding out a historic tooling/interop gap in the ecosystem.
Ben, Basile, and Yurko are developing a browser-based tool to help locate contents within [Filecoin CAR archives](https://spec.filecoin.io/systems/filecoin_files/piece/), without relying on public indexing services. This is a stepping stone to more general solutions for CAR indexing. It's definitely going to boost that part of the ecosystem!
## DASL Testing from Cole Anthony Capilongo, Hypha Worker Co-operative
Not all heroes wear capes, many of the cooler ones write tests. Tests are important in development, but they are particularly important when you're creating interoperable standards. The difference between a standard and a random piece of paper isn't that the standard was blessed by a special standards organization — there are plenty of worthlessly blessed pieces of paper out there — but rather that the standard has a comprehensive test suite passed by multiple independent production-quality implementations.
With this in mind, we're excited to also support [Cole Anthony Capilongo](https://hypha.coop/people/#Cole%20Anthony%20Capilongo) (from the mighty [Hypha](https://hypha.coop/)), who is working on a test suite for [DASL](https://dasl.ing/)'s [dCBOR42](https://dasl.ing/dcbor42.html) (an interoperable subset of IPLD for deterministic data encoding) and [CIDs](https://dasl.ing/cid.html) (a usable subset of IPFS CIDs). Cole will exercise the tests against multiple implementations and help us fix bugs in the specifications too. It's going to be fan<em>test</em>ic.
And beyond that, stay tuned: we will have more announcements coming.


@@ -0,0 +1,84 @@
---
title: "Ed25519 Support in Chrome: Making the Web Faster and Safer"
description: "Ed25519 is now supported in Chrome, finally joining the other browsers after much effort."
author: Bumblefudge
date: 2025-08-13
permalink: '/2025-08-ed25519/'
header_image: '/ed25519.jpg'
tags:
- funding
- ecosystem
- browsers
---
We're happy to share that Ed25519 is now supported in Chrome. Following Ed25519 support in [Firefox 129](https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Releases/129) in August 2024 and [Safari 17.0](https://developer.apple.com/documentation/safari-release-notes/safari-17-release-notes), Chrome finally followed suit in version 137 in May of this year. Ed25519 is now supported in every major browser engine, reaching [79% and counting](https://caniuse.com/?search=ed25519) of web users.
Ed25519 is a type of key, most known because it is the smallest and fastest commonly-available key for generating and verifying [elliptic curve cryptography](https://en.wikipedia.org/wiki/Elliptic-curve_cryptography) signatures. Digital signature algorithms let you prove that a piece of data was created by someone with a specific private key, without needing to reveal that key. They're essential for authenticating software updates, Git commits, cryptocurrency transactions, and, in distributed networks like IPFS, for authenticating peer identities and quickly establishing trust among peers.
## What Ed25519 in WebCrypto API means for developers
Why is Ed25519 support in browsers valuable? If you're coming from traditional web development, you're probably familiar with the common hashes (SHA-256) and key types (RSA) already there, which are “table stakes” (often even available through standard libraries) unless you have very niche cryptographic needs. Built-in support for Ed25519 represents a significant upgrade across the board, bringing Ed25519 into that category of “table stakes” that developers can stop worrying about and just take for granted.
**The key advantages**:
* **Smaller footprint**: Ed25519 keys are just 32 bytes (256 bits), compared to 256+ bytes for equivalent RSA security. Signatures are 64 bytes versus 256+ for RSA.
* **Faster operations**: Signature verification is roughly 10x faster than RSA and consistently faster than ECDSA. Signature generation is also faster.
* **Security by design**: Ed25519 was built from the ground up to resist timing attacks, uses deterministic signatures (no random number generation that can go wrong), and provides roughly 128-bit security level — equivalent to a 3072-bit RSA key.
* **Simpler implementation**: Unlike ECDSA, Ed25519 doesn't require developers to handle tricky parameters or worry about malleability attacks.
Ed25519 keys and signatures have long been among the most widely used everywhere except the browser, powering everyday connection protocols like SSH (remote terminal), SFTP (file transfer), and PGP (encrypted email), among many other things. But until now, if your web app needed to verify an Ed25519 signature (say, to validate a Git commit or authenticate with an SSH-based service), you had to bundle your own cryptography libraries, which can often account for half the “weight” (download size) of a web app that uses Ed25519 signatures today!
Of course, Ed25519 signatures don't just make it easier to support long-standing protocols; they also power lots of cutting-edge and forward-looking identity systems. For example, Radicle, an open-world/decentralized git community, uses EdDSA keys as usernames and requires all repositories to be signed by them (since almost all git tooling across languages defaults to an EdDSA key manager at the OS level).
Identity systems like Radicle aren't the only beneficiaries of being able to produce or check signatures in the browser; most “local-first” software, and of course all software based on content-addressable data, benefits as well. For example, many web3 applications and distributed systems use content-addressed envelopes and documents rather than flat, traditional authorization tokens. Two of these, UCANs and BeeHive (the local-first/CRDT variant of UCANs), scale up CRDTs and distributed workflows by giving every process, container, and resource an EdDSA key, so that all these authorization documents can be quickly and cheaply checked at any trust boundary, including in the end-user's browser. Making a safe verifier or web view on any of these systems just got days, maybe weeks, faster to build, and the resulting binaries are so much smaller that they can go many more places.
With Ed25519 now in WebCrypto, these operations become a simple browser API call — no external dependencies, no bundle bloat, and better performance.
That qualifier “lightweight” is the main contribution here (and a counterintuitively consequential one) to the commons of an increasingly cryptographic web. Tomorrow's web is going to accrue more and more ambient verifiability (even for publishers that don't pay the CDN tax!), and many other downstream efficiencies fall out from being able to link up EdDSA keys, which are already ubiquitous in the average end-user's operating systems and platforms.
## The Journey from Wanting to Having
Most of the updates you'll see on this blog are about making IPFS work better with today's web. Recently, we wrote about enabling true peer-to-peer connections through the HTTPS-only web with [AutoTLS](https://blog.libp2p.io/autotls/), and before that there was a [steady stream](https://blog.ipfs.tech/2024-shipyard-improving-ipfs-on-the-web/) of improvements from the IPFS Shipyard team that make it easier to discover and distribute IPFS data smoothly over the web.
But sometimes, if you dream big enough and invest on a long enough time horizon, you can actually make the web work a little more like IPFS. In recent years, there's been a broader movement towards a more hash-based, trustless web, with cryptography creeping into web standards as the reactive, edge-cached, CDN-enabled web shifts user expectations towards better and better UX. Intervening in this trend, and putting up the funding to do the slow, relentless pushing in web standards, plants the seeds of a web more aligned with the [IPFS Principles](https://specs.ipfs.tech/architecture/principles/).
### How the web standards are governed, practically
The web has, in fact, been worldwide since most of us can remember, and even those of us old enough to remember its pre-ubiquitous phase have trouble remembering the chaos of web development in those days, when each browser took a blasé approach to HTML interoperability and to how the details of HTTPS and TCP/IP were to be configured stably. Standardizing the web was almost as long and complex a technosocial process as making it worldwide or making it profitable, and it involved a lot of snarking on mailing lists, long before GitHub threads and Slack servers opened up new snarking surfaces.
The iterating and fine-tuning of software standards to a point where multiple completely-independent browsers could provably be uniform in how they render a given chaotic markup language (to say nothing of JavaScript!) is an ongoing and massive infrastructural accomplishment that battles on to this day. The janitorial work is led by a global-ish community, with lots of volunteers, underpaid experts, and tired academics providing extra patience and human-power. Their devotion keeps the web a durable and open platform for information, which could revert to mere plumbing for commerce if left entirely to large commercial players.
When people think about web standards, they naturally think about the Big Decisions and governance quagmires around specific technologies: how JavaScript can be sandboxed and policy-bound to safely run across domains, how CSS can get inherited and nuanced and last-mile excepted as it cascades over those same domains, and so on. Smaller languages, thin protocols that run over the web (federated identity, social web, payments), accessibility and localization standards, and Big Semantics round out the rest of what we call “web standards,” mostly standardized at the World Wide Web Consortium (W3C) with regular collaboration from adjacent standards development organizations.
Historically, most debate and standardization around cryptography happened one level below the web, at the [IETF](http://ietf.org/): as a rule of thumb, you could say “the web” is for humans and their messy semantics, while “the internet” is a superset powered by machines that can standardize on the stuff that vanishingly few humans even understand. But to make the web more secure (think: passkeys, FIDO, wallets), web standards increasingly need cryptographic ground truths in common web-wide as well. This has made the “WebCrypto API” one of the most closely watched groups at the W3C: there, browser developers agree on interfaces that build basic cryptographic building blocks into the browser itself, incentivizing reuse and transparency by offloading crypto complexity onto the “web platform” (the ground assumptions about how the whole web will work in any browser).
The WebCrypto interface gives all web developers a powerful take-out window from which to order reliable and deterministic signatures and hashes, enabling all kinds of powerful building blocks in a few lines of code. For instance, the increasingly generalized [SubResource Integrity](https://w3c.github.io/webappsec-subresource-integrity/) pattern allows many of the heaviest parts of web development (big media files, JavaScript bundles that change often due to security and dependency trees) to be “checksummed”, i.e. integrity-protected by a hash. This dovetails nicely with the generalized integrity protection that IPFS brings to web development; speeding up the browser's support for more sub-resource integrity mechanisms also makes the web much more IPFS-ready and makes IPFS more intuitive to tomorrow's web developers.
## Igalia, Standards Work and Actual Adoption
So how exactly did IPFS support directly bring about Ed25519 becoming a browser default? The answer involves a three-year collaboration with [Igalia](https://www.igalia.com/), a worker-owned open source consultancy and co-operative that has become a major contributor to browser development since 2001.
It's hard to overstate just how much work goes into even a comparatively simple web platform feature such as this one. If you want a peek at what happens behind the curtain, I strongly recommend [Javi's blog post from February 2025 summarizing the progress to date](https://blogs.igalia.com/jfernandez/2025/02/28/can-i-use-secure-curves-in-the-web-platform/).
The specific focus of this collaboration was getting Ed25519 “into the browser” by default, with an eye to tools like [Verified Fetch](https://blog.ipfs.tech/verified-fetch/). Verified Fetch not only fetches content from IPFS by CID like any other client; crucially, it also verifies that each block matches its CID. Since Verified Fetch needs to install as close to “invisibly” as possible, reducing its download size by a two-digit percentage is a huge UX improvement, achieved by outsourcing the math and the logic to the browser's built-in library.
A PR on a browser engine is significantly more complex than a typical open source contribution. It requires extensive coordination across implementations, specifications, security review processes, performance review processes, quality assurance, and more. Javier (“Javi”) Fernandez at Igalia drove [PRs](https://github.com/whatwg/html/issues/9158) against all three of the major independent browser engines in parallel, juggling change requests and nits and corner-cases to make sure all three would handle any inputs the exact same way under any configuration or combination of extensions.
The work began with identifying and [fixing a bug](https://github.com/w3c/webcrypto/pull/345) in the W3C specification governing the WebCrypto interface. Over three years, Javi systematically addressed technical challenges in three browsers' codebases, from low-level C implementations to API surface design.
## Whats Next
Ed25519 support went live in May 2025 [Chrome 137](https://developer.chrome.com/blog/chrome-137-beta#ed25519_in_web_cryptography), joining every other major browser before it.
Typically, it takes 2-3 years for new browser versions to proliferate across the user landscape. We anticipate that around 2027, developers will be able to confidently rely on simple and stable support for Ed25519 in most users' browsers.
As this transition happens, developers can drop weight from their software packages, complexity from testing and maintenance, and load time and bandwidth from their users' experience. In aggregate, that makes the web a lot more powerful for everyone, and more aligned with the IPFS principles: simple, modular, and verifiable. It's a rare case of everybody winning, and the entire web getting a little more stable and a little better.
Thanks to Protocol Labs (who initiated the collaboration with Igalia), the [IPFS Foundation](???), [Open Impact Foundation](https://openimpact.foundation/), and [WebTransitions.org](https://webtransitions.org/) for continuing to shepherd this initiative.
More IPFS initiatives and collaborations to make the web more simple, modular, and verifiable are in progress. They include [ElectronJS build variants](https://github.com/electron/electron/issues/42455) (to support better protocol handling), more useful [protocol handling in browser extensions](https://github.com/ipfs/in-web-browsers/issues/212) ([webtransitions theme](https://github.com/webtransitions/initiatives/issues/10)), as well as (we hope) streaming support in browser cryptography APIs!

---
date: 2025-07-25
permalink: /2025-js-libp2p-helia-devtools/
title: 'Debugging Superpowers With the New js-libp2p Developer Tools'
description: 'Discover the new js-libp2p developer tools from Shipyard that provide real-time debugging capabilities for js-libp2p and Helia nodes in both browsers and Node.js.'
canonicalUrl: https://ipshipyard.com/blog/2025-js-libp2p-devtools/
author: Daniel Norman
header_image: /dev-tools.jpg
tags:
- ipfs
- devtools
- js-libp2p
- browsers
- node.js
- extension
- Interplanetary Shipyard
---
_This blog post [originally appeared on the Interplanetary Shipyard blog](https://ipshipyard.com/blog/2025-js-libp2p-devtools/)_
[Interplanetary Shipyard](https://ipshipyard.com/) is thrilled to share [js-libp2p inspector](https://github.com/ipshipyard/js-libp2p-inspector/), the new developer tools for debugging and inspecting js-libp2p and Helia, for use both in browsers and Node.js.
Debugging is an essential part of software development, and having the right tools can make all the difference. The new developer tools provide a user-friendly interface to inspect your libp2p nodes in real-time, tightening the feedback loop and making it easier to diagnose issues.
## Inspecting and monitoring throughout the development lifecycle
These new developer tools expand the existing set of metrics implementations for js-libp2p, which include [metrics-prometheus](https://github.com/libp2p/js-libp2p/tree/main/packages/metrics-prometheus) and [metrics-opentelemetry](https://github.com/libp2p/js-libp2p/tree/main/packages/metrics-opentelemetry).
While Prometheus and OpenTelemetry are for monitoring and tracing in production (though not exclusively), the inspector is intended for use during development. Together, these tools provide a comprehensive solution for monitoring and debugging js-libp2p and Helia nodes throughout the development lifecycle.
## Getting started
To inspect a js-libp2p or Helia node, you need to pass the metrics implementation from the [`@ipshipyard/libp2p-inspector-metrics`](https://www.npmjs.com/package/@ipshipyard/libp2p-inspector-metrics) package to your js-libp2p or Helia factory:
### js-libp2p example
```js
import { createLibp2p } from 'libp2p'
import { inspectorMetrics } from '@ipshipyard/libp2p-inspector-metrics'
const node = await createLibp2p({
metrics: inspectorMetrics()
})
```
### Helia example
```js
import { createHelia } from 'helia'
import { inspectorMetrics } from '@ipshipyard/libp2p-inspector-metrics'
const node = await createHelia({
libp2p: {
metrics: inspectorMetrics()
},
})
```
Once you have your node running with the inspector metrics enabled, you can start inspecting it using the browser extension or the Electron app.
The following video walks through setup and usage with both Node.js and browser environments:
@[youtube](AKNGtn7EZxI)
## Try the new developer tools
The new developer tools consist of several npm packages that work together:
- [`@ipshipyard/libp2p-devtools`:](https://github.com/ipshipyard/js-libp2p-inspector/tree/main/packages/libp2p-devtools) Browser DevTools extension to inspect a libp2p node running in a web page.
- [`@ipshipyard/libp2p-inspector`:](https://github.com/ipshipyard/js-libp2p-inspector/tree/main/packages/libp2p-inspector) Electron based app to inspect a running libp2p node in Node.js.
- [`@ipshipyard/libp2p-inspector-metrics`:](https://github.com/ipshipyard/js-libp2p-inspector/tree/main/packages/libp2p-inspector-metrics) Metrics implementation that instruments the libp2p node such that it can be inspected by the inspector or the browser extension. This package needs to be imported in your js-libp2p based application to enable inspection.
- [`@ipshipyard/libp2p-inspector-ui`:](https://github.com/ipshipyard/js-libp2p-inspector/tree/main/packages/libp2p-inspector-ui) The user interface shared by both the Electron inspector and the browser extension.
We encourage you to try out the new developer tools and provide feedback. You can find the source code on [GitHub](https://github.com/ipshipyard/js-libp2p-inspector).

---
title: 'Faster Peer-to-Peer Retrieval in Browsers With Caching in the Delegated Routing HTTP Server'
description: 'How caching and active peer probing in Someguy, the Delegated Routing server, accelerate peer-to-peer content retrieval in browsers and mobile applications.'
author: Daniel Norman
date: 2025-09-05
permalink: /2025-delegated-routing-caching/
header_image: /someguy-cache/cover.png
tags:
- ipfs
- someguy
- delegated routing
- performance
- mobile
- browsers
- caching
---
## TL;DR
Last year we shipped a major improvement to [Someguy](https://github.com/ipfs/someguy/pull/90), the HTTP Delegated Routing API for the Amino DHT and IPNI. The update introduced a cached address book and active peer probing for DHT peers. This change considerably increases the ratio of providers with addresses returned, which in turn accelerates peer-to-peer content retrieval in browsers and mobile applications. It's included in the [v0.7.0 release](https://github.com/ipfs/someguy/releases/tag/v0.7.0) of Someguy. Follow along for the full story.
## What is Someguy and why it matters
[Someguy](https://github.com/ipfs/someguy) is a [Delegated Routing HTTP API](https://specs.ipfs.tech/routing/http-routing-v1/) for proxying IPFS routing requests to the Amino DHT, IPNI or any other routing system that implements the same API.
Its main purpose is to help IPFS clients find provider peers for CIDs and their network addresses, and expose that as an HTTP API. This is crucial for browsers and mobile applications that need to fetch IPFS content without running a full DHT client, which is often impractical on resource-constrained devices, like mobile phones and web browsers.
An Amino DHT client is stateful, and typically opens hundreds of connections to maintain its routing table and find provider and peer records. The problem is that browsers and mobiles are limited in their networking capabilities — both in terms of the **transports** they can use and the **number of connections** they can open. Mobiles also have limited battery and bandwidth, making it impractical to run a full DHT client.
Delegated routing allows these devices to query the DHT for content providers in a single HTTP request, rather than requiring them to maintain complex DHT connections themselves.
Someguy serves as that helper, making decentralised retrieval possible for content provided to the DHT: a device queries the DHT in a single HTTP request and gets back a list of provider peers that have the data for a CID. This is all done over HTTP, which is universally supported by browsers and mobile apps.
The IPFS Foundation provides a public delegated routing endpoint backed by Someguy with the URL `https://delegated-ipfs.dev/routing/v1` that is used by [Helia](https://github.com/ipfs/helia/blob/a0cac72e5b440bf7ea7356571b0f244e05c896e0/packages/http/src/utils/libp2p-defaults.ts#L31) by default to accelerate peer-to-peer content retrieval in browsers and mobile applications.
## The role of Someguy in IPFS content retrieval
When Helia or [`@helia/verified-fetch`](https://www.npmjs.com/package/@helia/verified-fetch) fetches content from the IPFS network, it goes through the following process:
1. Helia requests providers for a CID from Someguy using `Accept: application/x-ndjson` header for streaming responses: `GET https://delegated-ipfs.dev/routing/v1/providers/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi`
2. Someguy traverses the Amino DHT and responds with the providers that have the content, _typically_ along with their network addresses.
- Example response:
```json
{
"Providers": [
{
"Addrs": [
"/ip4/12.144.75.172/tcp/4001",
"/ip4/12.144.75.172/udp/4001/quic-v1",
"/dns4/12-144-75-172.k51qzi5uqu5digdd4g1rmh3ircn34nxsehlp9ep60q96fqubc1t2604u88gin4.libp2p.direct/tcp/4001/tls/ws",
"/ip4/12.144.75.172/udp/4001/webrtc-direct/certhash/uEiCcNkDjuquRDqyq3hvbp80GeS3joyomKoMjddVSLKdYUw",
"/ip4/12.144.75.172/udp/4001/quic-v1/webtransport/certhash/uEiAUslaNVe83tW3hkVALwQUiKieQjzs77YXb4mLpo2yfJA/certhash/uEiAr6d8yeHt21X9jvRoHGwdtuLm_hDFHra0atSSCK-79HQ"
],
"ID": "12D3KooWFxAMbz588VcN4Ae69nMiGvVscWEyEoA6A3fcJxhSzBFM",
"Schema": "peer"
}
]
}
```
3. Browser/mobile app connects directly to those peers as soon as each provider record arrives in the stream, enabling parallel connection attempts and faster content retrieval
**The performance equation is straightforward**: the faster Someguy can respond with working peer addresses, the quicker browsers and mobile apps can start fetching content peer-to-peer. Every millisecond saved in routing queries directly translates to faster content delivery.
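As a rough sketch of the flow above (the helper names are ours, and error handling is omitted), a browser client can stream provider records and start dialing as each one arrives:

```js
// Incrementally parse NDJSON: returns complete records plus any trailing partial line.
function parseNdjson (buffer, chunk) {
  const lines = (buffer + chunk).split('\n')
  const rest = lines.pop() // keep the trailing partial line for the next chunk
  const records = lines.filter(l => l.trim()).map(l => JSON.parse(l))
  return { records, rest }
}

// Query the delegated routing endpoint and hand each provider record
// to `onProvider` as soon as it arrives in the response stream.
async function findProviders (cid, onProvider) {
  const res = await fetch(
    `https://delegated-ipfs.dev/routing/v1/providers/${cid}`,
    { headers: { Accept: 'application/x-ndjson' } }
  )
  let buf = ''
  for await (const chunk of res.body.pipeThrough(new TextDecoderStream())) {
    const { records, rest } = parseNdjson(buf, chunk)
    buf = rest
    for (const record of records) onProvider(record) // e.g. dial record.Addrs immediately
  }
}
```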
## The problem: provider records without peer addresses
Before [v0.7](https://github.com/ipfs/someguy/releases/tag/v0.7.0), Someguy would often respond with provider records that included peer IDs but **not** their network addresses. This meant that clients had to make an additional request to `/routing/v1/peers/{peerid}` to get the actual addresses of each peer.
For example, unlike the response above, Someguy would return a response like this:
```json
{
"Providers": [
{
"Addrs": [],
"ID": "12D3KooWFxAMbz588VcN4Ae69nMiGvVscWEyEoA6A3fcJxhSzBFM",
"Schema": "peer"
}
]
}
```
### But why are providers returned without peer addresses?
The widely-used [go-libp2p](https://github.com/libp2p/go-libp2p) and [go-libp2p-kad-dht](https://github.com/libp2p/go-libp2p-kad-dht/) libraries have a couple of important constants that control how long provider and peer addresses are cached in memory:
- `DefaultProvideValidity = 48 * time.Hour`: TTL for provider records mapping between a multihash (from the CID) and peer IDs.
- `DefaultProviderAddrTTL = 24 * time.Hour`: TTL for the **addresses** of those providers. These addresses are returned in DHT RPC requests alongside the provider record. After the addresses expire, clients need an extra lookup to find the multiaddresses associated with the returned peer IDs.
- `RecentlyConnectedAddrTTL = time.Minute * 15`: Time during which a peer's address is kept in memory after a peer disconnects. Applies to any libp2p peer that has been recently connected to.
In other words, DHT servers can return provider records without peer addresses. This happens in the window starting 24 hours after the provider record is published and lasting until the record expires. This design was meant to ensure that provider records are not returned with stale addresses: since reproviding typically happens every 24 hours, DHT servers should always have fresh addresses for providers. But reality is messier.
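The window described above can be made concrete with a little arithmetic (a simplified sketch; the constants come from go-libp2p-kad-dht, expressed here in hours):

```js
// What a DHT server can return for a provider record of a given age,
// using the go-libp2p-kad-dht defaults (in hours).
const PROVIDE_VALIDITY = 48   // DefaultProvideValidity
const PROVIDER_ADDR_TTL = 24  // DefaultProviderAddrTTL

function recordState (ageHours) {
  if (ageHours >= PROVIDE_VALIDITY) return 'expired'
  if (ageHours >= PROVIDER_ADDR_TTL) return 'record without addresses' // extra peer lookup needed
  return 'record with addresses'
}

recordState(12) // 'record with addresses'
recordState(30) // 'record without addresses': published >24h ago, not yet expired
```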
## The solution: caching peer addresses
[PR #90](https://github.com/ipfs/someguy/pull/90) introduces several mechanisms that ensure Someguy always returns provider records with fresh peer addresses, or doesn't return the provider record at all, thereby saving clients additional peer routing requests for unroutable peers.
This is achieved through a combination of: a cached address book, active peer probing, and a cached router which augments results with addresses and filters out undialable peers.
As it turns out, caching peer addresses is pretty cheap, especially considering that the work to discover them would be done anyway in subsequent requests. So we end up reducing the total request rate at the cost of a slight increase in memory consumption.
### Cached address book
The [new cached address book](https://github.com/ipfs/someguy/blob/6cb37a4da3ea3379a89a184335c51370b8abb48b/cached_addr_book.go) wraps the go-libp2p [memoryAddrBook](https://github.com/libp2p/go-libp2p/blob/master/p2p/host/peerstore/pstoremem/addr_book.go) and has the following properties:
- **48-hour cache**: Stores peer addresses for 48 hours, matching the DHT provider record expiration.
- **1M peer capacity**: This sets an upper limit on memory usage, allowing Someguy to handle a large number of peers without excessive memory consumption.
- **Memory-efficient**: Uses LRU eviction to keep the most relevant peers readily available.
- **Event-driven cache maintenance**: Subscribes to the libp2p event bus and caches peers after successful libp2p identify events, rather than actively polling the DHT for peer addresses, so peers are only cached as a result of actual delegated routing requests.
### Active peer probing in the background
Rather than serving stale addresses, Someguy now tests peer connectivity in the background:
- **Background verification**: Every 15 minutes, tests whether cached peer addresses still work
- **Exponential backoff**: Stops wasting time on persistently offline peers
- **Concurrent testing**: Tests up to 20 peer connections simultaneously
- **Selective probing**: Only tests peers that haven't been verified recently
### Cached router: better responses for HTTP clients
The `cachedRouter` (`server_cached_router.go`) uses the cached address book to augment the routing results for both peer and provider requests with a non-blocking iterator:
1. **Cache-first responses**: Returns verified peer addresses immediately when available
2. **Background resolution**: If no cached addresses exist, looks up fresh ones without blocking the response
3. **Streaming results**: Sends working peer addresses as soon as they're found
4. **Fallback handling**: Omits peers that can't be reached rather than sending bad addresses
All these improvements are enabled by default in Someguy v0.7.0 and later (see the [`SOMEGUY_CACHED_ADDR_BOOK`](https://github.com/ipfs/someguy/blob/main/docs/environment-variables.md#someguy_cached_addr_book) env variable for how to disable it).
## Measuring impact
To measure the impact of these changes, we deployed two instances of Someguy: one with the cached address book and active probing enabled, and one with them disabled.
For the instance with caching enabled, we realised that the cached address book took some time to warm up: peers are only cached [following mutual authentication and running the identify protocol](https://github.com/ipfs/someguy/blob/316dbc27f3cfc4df1276a7afcff33f5b4f05688d/cached_addr_book.go#L176-L195), which is initiated as a downstream effect of incoming content and peer routing requests (unless running with the accelerated DHT client, which performs a DHT crawl on startup).
To determine when the cache was sufficiently warm, we observed the cached address book size [metric](https://github.com/ipfs/someguy/blob/316dbc27f3cfc4df1276a7afcff33f5b4f05688d/cached_addr_book.go#L80-L85) and waited until it stabilised, which takes around 12 hours, at which point the cache held about 30k peers. The metric continues growing gradually, at a much slower rate, eventually plateauing at ~60k peers, which correlates with the number of DHT servers [measured by ProbeLab](https://probelab.io/ipfs/kpi/#client-vs-server-node-estimate) (measured in Q3 2025).
![cached address book size](../assets/someguy-cache/cached_addr_book_growth.png)
We then piped the last 500k CIDs that were requested from the public ipfs.io gateway through each instance's `/routing/v1/providers/[CID]` endpoint at a rate of 100 req/second concurrently, and examined the _cache hit rate_, the most important metric for measuring the impact of this work.
We also looked at HTTP request latency, and HTTP success rates to get a fuller picture of the impact of this change, and to see if there were any unexpected side effects.
Note that the list of 500k CIDs was not deduplicated, this was to reflect real-world usage patterns, where popular CIDs are requested more frequently.
### Peer Address Cache effectiveness
| | Lookups | Percentage |
| ------------------------ | --------- | ---------- |
| **Address Cache Used** | 1,287,619 | 34.4% |
| **Address Cache Unused** | 2,455,120 | 65.6% |
| **Total** | 3,742,739 | 100.0% |
We measured two key metrics to assess the cache impact:
**(1) How often is the cache needed?**
In ~66% of requests, the DHT returned provider records with addresses already included. The remaining ~34% returned providers without addresses, requiring either cache lookup or additional peer routing.
**(2) When needed, how effective is the cache?**
For the 34.4% of requests that needed address resolution:
- Cache hit: **~83%** (addresses found in cache)
- Cache miss: ~17% (required fresh peer lookup)
**Bottom line:** The cache eliminates ~83% of scenarios where clients would otherwise need to make additional peer routing requests 🎉
### HTTP request latency and success rate
Here we examine the P95 (95th percentile) latency for HTTP requests to `/routing/v1/providers/[CID]` grouped by response code (200 vs 404) and the success rates measured by the ratio of 200 to 404 responses.
It's worth noting that we didn't expect significant reduction in latency or error rates as a result of the cache, because the cached address book is only used to augment results from the DHT, and doesn't change the underlying DHT query process.
| Scenario | 200s P95 | 404s P95 | Success Rate | Latency Improvement |
| ---------------------------- | -------- | -------- | ------------ | ------------------- |
| **Cache Disabled**           | 1.91s    | 7.35s    | 52.0%        | baseline            |
| **Cache Enabled and Warmed** | 1.35s | 7.46s | 57.2% | -560ms (29% faster) |
### Key insights
With peer address caching enabled, we observed unexpected improvements beyond just address availability:
- P95 latency for successful responses improved from 1.91s to 1.35s (29% faster)
- Success rate increased from 52.0% to 57.2%
These improvements likely stem from the active background probing, which pre-validates peer connectivity. When duplicate CIDs are requested, Someguy can immediately return known-good peers from cache, accelerating the routing and avoiding DHT traversal for subsequent lookups of the same content.
The results demonstrate that the cached address book and active probing have no negative impact on latency or success rates, and actually improve both metrics.
## Configuration
The cached address book and active probing can be configured through the following environment variables:
- [SOMEGUY_CACHED_ADDR_BOOK](https://github.com/ipfs/someguy/blob/main/docs/environment-variables.md#someguy_cached_addr_book)
- [SOMEGUY_CACHED_ADDR_BOOK_ACTIVE_PROBING](https://github.com/ipfs/someguy/blob/main/docs/environment-variables.md#someguy_cached_addr_book_active_probing)
- [SOMEGUY_CACHED_ADDR_BOOK_RECENT_TTL](https://github.com/ipfs/someguy/blob/main/docs/environment-variables.md#someguy_cached_addr_book_recent_ttl)
See the [docs](https://github.com/ipfs/someguy/blob/main/docs/environment-variables.md) for more details.
## Metrics
When the cached address book and active probing are enabled, Someguy exposes Prometheus metrics for monitoring the cache and the probing behaviour, which are documented in the [metrics docs](https://github.com/ipfs/someguy/blob/main/docs/metrics.md#someguy-caches).
## Additional optimization: HTTP-level caching
Beyond the peer address caching discussed above, Someguy also implements HTTP-level caching through `Cache-Control` headers. This provides a complementary layer of caching that benefits all clients, even those that don't make repeated requests themselves:
**Cache durations:**
- Provider responses with results: **5 minutes** - fresh enough to catch new providers while reducing duplicate DHT lookups
- Empty responses (no providers found): **15 seconds** - short duration allows quick discovery if content becomes available
- `stale-while-revalidate`: **48 hours** - clients can use stale data while fetching updates in the background
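Expressed as response headers, the durations above correspond to values along these lines (illustrative, not a verbatim dump of Someguy's output; 300s = 5 minutes, 172800s = 48 hours):

```
# Provider response with results: fresh for 5 minutes, usable stale for 48 hours
Cache-Control: public, max-age=300, stale-while-revalidate=172800

# Empty response (no providers found): re-check after 15 seconds
Cache-Control: public, max-age=15
```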
This HTTP caching layer works together with the peer address cache:
- The address cache ensures provider records include dialable addresses
- HTTP caching prevents redundant requests for the same CID across different clients
- CDNs and proxies can serve popular content routing responses without hitting Someguy
Together, these caching layers significantly reduce latency and server load while maintaining data freshness.
## Conclusion
The addition of peer address caching and active probing to Someguy represents a significant step forward for decentralized content retrieval in constrained environments. By **eliminating ~83% of additional peer lookups** and **reducing P95 latency by ~30%** (~560ms), these improvements make direct peer-to-peer content retrieval noticeably faster for millions of users accessing IPFS through browsers and mobile apps.
This work is available now in [Someguy releases](https://github.com/ipfs/someguy/releases) starting from v0.7.0 and is already serving production traffic at [public good](https://docs.ipfs.tech/concepts/public-utilities/#delegated-routing-endpoint) `https://delegated-ipfs.dev/routing/v1/providers`. Anyone can [run their own Someguy instance](https://github.com/ipfs/someguy?tab=readme-ov-file#install) to provide delegated routing for their users or applications. For operators, the caching feature is enabled by default and can be configured via [environment variables](https://github.com/ipfs/someguy/blob/main/docs/environment-variables.md).
Looking ahead, we continue to explore ways to make IPFS more accessible and performant for all users, regardless of their device capabilities.

---
title: "How to Migrate IPFS Websites from Fleek to Modular Infrastructure"
description: "A how-to guide for future-proofing your content-addressed website hosting."
author: Marcin Rataj
date: 2026-01-06
permalink: '/2026-fleek-migration/'
canonicalUrl: https://ipshipyard.com/blog/2026-ipfs-self-hosting-migration/
header_image: '/2022-ipfs-gateways-1.png'
tags:
- kubo
- gateways
- fleek
- websites
---
_Cross-posted from the [Shipyard blog](https://ipshipyard.com/blog/2026-ipfs-self-hosting-migration/)._
This is a practical guide to hosting websites on both HTTP and IPFS using modular, swappable components. When Fleek announced it was discontinuing hosting, we migrated 15+ IPFS Project websites to a setup designed to survive any single provider shutting down. Whether you're moving off Fleek or just want more resilient hosting, this guide covers the approach and the tools we used.
## What Changed
Sites including [ipfs.tech](https://ipfs.tech), [docs.ipfs.tech](https://docs.ipfs.tech), [blog.ipfs.tech](https://blog.ipfs.tech), and [specs.ipfs.tech](https://specs.ipfs.tech) now use:
- **[GitHub Pages](https://docs.github.com/en/pages)** for Web2 HTTPS hosting (we already use GitHub for code, so no new third-party dependencies)
- **[Kubo](https://github.com/ipfs/kubo)** for CID and CAR creation (we control [content-addressing](https://docs.ipfs.tech/concepts/content-addressing/), making content portable across any provider)
- **[IPFS Cluster](https://ipfscluster.io/)** for long-term pinning and serving content to IPFS network (self-hosted by Shipyard; [pinning services](https://docs.ipfs.tech/how-to/work-with-pinning-services/) work equally well)
- **[DNSLink](https://docs.ipfs.tech/concepts/dnslink/)** for mapping CIDs to human-readable URLs (decouples naming from content location; automated via [dnslink-action](https://github.com/ipshipyard/dnslink-action))
All sites now have redundant hosting: traditional HTTP via GitHub Pages and content-addressed access via [IPFS Desktop](https://docs.ipfs.tech/install/ipfs-desktop/) with [IPFS Companion](https://docs.ipfs.tech/install/ipfs-companion/) and third-party [public IPFS gateways](https://ipfs.github.io/public-gateway-checker/).
## Third-Party Services Come and Go
Fleek Hosting was a turn-key solution that combined HTTP CDN with TLS, IPFS pinning, IPFS gateway, DNSLink, IPNS, ENS, and GitHub Actions CI integration in one platform. [Fleek is pivoting to AI](https://web.archive.org/web/20260108212232/https://www.fleek.sh/blog/2026-outlook) and [discontinuing its hosting services on January 31, 2026](https://github.com/ipshipyard/waterworks-community/issues/23).
The IPFS service landscape is always evolving. Some providers have shut down or changed focus: [nft.storage transitioned operations](https://web.archive.org/web/20250915005638/https://nft.storage/blog/nft-storage-operation-transitions-in-2025), [Infura deprecated its IPFS public API and gateway](https://web.archive.org/web/20230206190257/blog.infura.io/post/ipfs-public-api-and-gateway-deprecation), and [Scaleway shut down IPFS pinning](https://web.archive.org/web/20251130221548/https://labs.scaleway.com/en/ipfs-pinning/). At the same time, new options have emerged: [Storacha](https://storacha.network/) launched as a successor to web3.storage, Shipyard [took over Cloudflare's public IPFS gateways](https://web.archive.org/web/20251112005234/https://blog.cloudflare.com/cloudflares-public-ipfs-gateways-and-supporting-interplanetary-shipyard/), and pinning services like [Pinata](https://pinata.cloud/) and [Filebase](https://filebase.com/) continue to grow. This isn't a criticism of any particular service. Commercial offerings evolve based on business realities. The lesson: design your setup so that no single provider change requires starting over.
## Modularity as the Future-Proof Approach
IPFS is [built for robustness](https://specs.ipfs.tech/architecture/principles/#robustness): strict about verification outcomes, tolerant about methods. A hosting strategy should follow the same principle.
Decouple Web2 hosting from IPFS content-addressing. Keep each component independent:
- **HTTP**: GitHub Pages, Cloudflare Pages, or a self-hosted server
- **IPFS**: pinning/storage service, self-hosted Kubo/IPFS Cluster, or both
- **DNS**: Cloudflare, Gandi, DNSimple, Route53, or any provider with a management API
DNS serves both layers: HTTP needs A/AAAA records and TLS certificates; IPFS needs TXT records for DNSLink to map domains to CIDs.
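As a concrete sketch of that last piece, DNSLink stores the content path in a TXT record on the `_dnslink` subdomain. A minimal illustration of the record format and how a client parses it (the domain and CID below are placeholders, not real records):

```python
# DNSLink convention: a TXT record on the _dnslink subdomain holds the
# content path, e.g. (placeholder domain and CID):
#   _dnslink.example.com.  TXT  "dnslink=/ipfs/bafyExampleCid"

def parse_dnslink(txt_value: str) -> str:
    """Extract the /ipfs/ or /ipns/ content path from a DNSLink TXT value."""
    prefix = "dnslink="
    if not txt_value.startswith(prefix):
        raise ValueError("not a DNSLink record")
    path = txt_value[len(prefix):]
    if not path.startswith(("/ipfs/", "/ipns/")):
        raise ValueError("unsupported DNSLink path")
    return path

print(parse_dnslink("dnslink=/ipfs/bafyExampleCid"))  # -> /ipfs/bafyExampleCid
```

Gateways and IPFS-aware clients resolve the record at `_dnslink.<domain>`, which is why updating a site is just rewriting one TXT value.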
The key: control CID and CAR creation. Creating the CAR locally means no lock-in to any provider. Pick content providers that accept the CAR. If one shuts down, upload the same CAR elsewhere. HTTP hosting and DNS stay untouched.
Compare this to an all-in-one platform. When it shuts down, everything needs rebuilding.
Two standards make this work: [CAR files](https://docs.ipfs.tech/concepts/glossary/#car) for portable content and [DNSLink](https://docs.ipfs.tech/concepts/dnslink/) for human-readable addressing. Switching providers requires no pipeline changes.
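To see why the same bytes yield the same address with every provider, here is a toy derivation of a CIDv1 for a single raw block using only the Python standard library. This is a sketch of the multiformats layout for one narrow case (raw codec, sha2-256), not a replacement for a real CID library:

```python
import base64
import hashlib

def varint(n: int) -> bytes:
    """Unsigned LEB128-style varint used by multiformats."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def cid_v1_raw(data: bytes) -> str:
    """CIDv1 (base32) for a single raw block: cid-version | codec | multihash."""
    digest = hashlib.sha256(data).digest()
    multihash = varint(0x12) + varint(len(digest)) + digest   # sha2-256, 32 bytes
    cid_bytes = varint(0x01) + varint(0x55) + multihash       # v1, raw codec
    b32 = base64.b32encode(cid_bytes).decode().lower().rstrip("=")
    return "b" + b32                                          # multibase base32 prefix

print(cid_v1_raw(b"hello"))  # sha2-256 raw-block CIDv1s always begin with "bafkrei"
```

Because the identifier is derived purely from the content, any provider handed the same CAR reproduces the same CIDs, which is exactly what makes switching providers a non-event.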
## Our Setup
We use our own [IPFS Cluster](https://ipfscluster.io/) instance since Shipyard already runs IPFS infrastructure. For most projects, a [third-party pinning service](https://docs.ipfs.tech/how-to/work-with-pinning-services/#use-a-third-party-pinning-service) works just as well with less operational overhead.
Our CI/CD uses two GitHub Actions:
- [ipshipyard/ipfs-deploy-action](https://github.com/ipshipyard/ipfs-deploy-action) creates a CID, exports the website DAG as a CAR file, uploads to IPFS Cluster or other pinning services, and provides PR preview links
- [ipshipyard/dnslink-action](https://github.com/ipshipyard/dnslink-action) automatically updates DNSLink TXT records when the CID changes
![ipshipyard/ipfs-deploy-action posts a comment on each PR with gateway preview links and CID commit status](../assets/2026-fleek-migration-pr-comment.jpg)
For security, we use a sandboxed DNS zone pattern: CI credentials can only modify DNSLink TXT records, not other DNS entries. If credentials leak, the blast radius is limited to the `_dnslink` subdomain. See the [dnslink-action security documentation](https://github.com/ipshipyard/dnslink-action?tab=readme-ov-file#security-sandboxed-dnslink-domain) for details.
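The sandboxed pattern can be sketched as two zone fragments (all names below are placeholders): the parent zone delegates only the `_dnslink` subtree to a second zone, and CI credentials are scoped to that second zone alone.

```
; parent zone (example.com), managed separately from CI:
; delegate only the _dnslink subtree
_dnslink.example.com.   3600  IN  NS   ns1.sandbox-dns-provider.example.

; delegated sandbox zone: the only records CI credentials can write
_dnslink.example.com.   300   IN  TXT  "dnslink=/ipfs/<current-site-cid>"
```

Even with leaked credentials, an attacker can repoint the DNSLink record but cannot touch A/AAAA, MX, or any other record in the parent zone.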
## Getting Started
Already have HTTP hosting? Just add IPFS and DNSLink. Migrating from Fleek? Pick all three.
1. **HTTP**: [GitHub Pages](https://docs.github.com/en/pages) and [Cloudflare Pages](https://pages.cloudflare.com/) are simple and maintenance free. For all-in-one self-hosted HTTP+IPFS, see [Setup a DNSLink Gateway with Kubo and Caddy](https://docs.ipfs.tech/how-to/websites-on-ipfs/dnslink-gateway/)
2. **IPFS**: Choose a [pinning service](https://docs.ipfs.tech/how-to/work-with-pinning-services/#use-a-third-party-pinning-service) or run your own node. Follow [Deploy static apps to IPFS with GitHub Actions](https://docs.ipfs.tech/how-to/websites-on-ipfs/deploy-github-action/)
3. **DNS**: See [Automate DNSLink updates with GitHub Actions](https://docs.ipfs.tech/how-to/websites-on-ipfs/dnslink-action/) for TXT record automation, or use [OctoDNS](https://github.com/octodns/octodns) for more providers
The [ipfs-deploy-action](https://github.com/marketplace/actions/deploy-to-ipfs) creates the CID and exports the site as a CAR file. This makes content portable across any provider that accepts CARs. The [dnslink-action](https://github.com/marketplace/actions/dnslink-action) links CID to DNS, allowing [IPFS-enabled browsers](https://docs.ipfs.tech/install/ipfs-companion/) to load content over IPFS.
## Conclusion
Third-party services will continue to come and go. The takeaway: separate your concerns and use standards-based tooling. Keep HTTP hosting independent from IPFS content-addressing, create CARs in your own CI rather than someone else's cloud service so you can switch providers, and automate DNSLink updates so they're not tied to any particular service. When one component needs replacing, swap it out without rebuilding everything. This modularity is the robustness that IPFS enables.
All the tools we used are open source and documented. If you have questions, open an issue in the respective repositories or reach out in the [IPFS community forums](https://discuss.ipfs.tech/).


@@ -0,0 +1,66 @@
---
title: "IPLD 2025 Review: From Monoliths to Modules"
description: "The year that brought us modular Rust libraries, faster DAG-CBOR, stable multiformats, and a simpler on-ramp with DASL."
author: Volker Mische
canonicalUrl: https://ipfsfoundation.org/ipld-2025-in-review/
date: 2026-01-19
permalink: '/2026-01-ipld-2025-review/'
header_image: '/2026-01-rusted-facade.jpg'
tags:
- ipld
- dasl
---
# IPLD 2025 Review: From Monoliths to Modules
_Cross-posted from the [IPFS Foundation blog](https://ipfsfoundation.org/ipld-2025-in-review/)._
Whether you're building on [IPFS](https://ipfs.tech/), [Filecoin](https://filecoin.io/), or [ATProto](https://atproto.com/), [IPLD](https://ipld.io/) (InterPlanetary Linked Data) — a shared data model for the self-certifying, content-addressable web — ensures your data is portable and verifiable across platforms. This post covers the past year's progress in the IPLD ecosystem and a preview of what to expect in 2026, with a focus on Rust IPLD.
## Rust IPLD: From Monolith to Modules
Following the [JavaScript implementation's](https://ipld.io/libraries/javascript/) lead, we recognized that most projects only need specific IPLD components rather than the full stack. Earlier this year, we successfully migrated the `libipld` functionality into separate, focused crates and [officially deprecated](https://github.com/ipld/libipld/commit/6f0028519d60078f062b1fad403e2c783ce3fb2c) the Rust implementation [`libipld`](https://crates.io/crates/libipld). This modular architecture is now the standard across all IPLD implementations.
Over the past few months, we helped migrate all actively maintained projects that had updates in the past two years. Many projects had already made the switch on their own.
**Performance win:** Moving the Python [DAG-CBOR](https://ipld.io/docs/codecs/known/dag-cbor/) library [`python-libipld`](https://github.com/MarshalX/python-libipld) to [`cbor4ii`](https://crates.io/crates/cbor4ii) and the latest [Rust `cid`](https://crates.io/crates/cid) version made [Bluesky custom feeds in Python ~2x faster](https://bsky.app/profile/marshal.dev/post/3m6wqrij2es2v).
## Migration Guide: What Replaces What
If you're a Rust developer still using `libipld`, here's your upgrade path:
### For IPLD Data Model work
**Use:** [`ipld-core`](https://crates.io/crates/ipld-core), which is similar to the deprecated [`libipld-core`](https://crates.io/crates/libipld-core).
### For encoding/decoding
**Old way:** Custom `libipld` traits for [DAG-CBOR](https://ipld.io/docs/codecs/known/dag-cbor/), [DAG-JSON](https://ipld.io/docs/codecs/known/dag-json/), and [DAG-PB](https://ipld.io/docs/codecs/known/dag-pb/). [`libipld-cbor-derive`](https://crates.io/crates/libipld-cbor-derive) for IPLD Schema-like transformations.
**New way:** Serde-based crates that go directly from serialization to native Rust types without the IPLD Data Model conversion in between:
- [`serde_ipld_dagcbor`](https://crates.io/crates/serde_ipld_dagcbor/) for DAG-CBOR
- [`serde_ipld_dagjson`](https://crates.io/crates/serde_ipld_dagjson/) for DAG-JSON
- [`ipld-dagpb`](https://crates.io/crates/ipld-dagpb) for DAG-PB (not Serde-based since DAG-PB doesn't support the full IPLD Data Model)
[IPLD Schema-like transformations](https://ipld.io/docs/schemas/features/representation-strategies/) can now be done directly with [Serde attributes](https://serde.rs/attributes.html).
**Adoption in the wild:** `serde_ipld_dagcbor` is now widely used in the Rust ATProto community, including [rsky](https://github.com/blacksky-algorithms/rsky) (AT Protocol implementation in Rust), [ATrium](https://github.com/atrium-rs/atrium), and [jacquard](https://tangled.org/nonbinary.computer/jacquard) (ATProto/Bluesky libraries).
## IPLD Schemas
[@rvagg](https://github.com/rvagg/) made a [big upgrade to the code generation of IPLD Schemas](https://github.com/ipld/js-ipld-schema/pull/135). When you define a schema, you can now generate code for Go, Rust, and TypeScript.
## Multiformats
The Rust [multiformats](https://multiformats.io/) implementations are under active maintenance and all actionable items on [`cid`](https://crates.io/crates/cid), [`multihash`](https://github.com/multiformats/rust-multihash), and [`multibase`](https://crates.io/crates/multibase) have been resolved.
Rust multiformats now joins Go and JS in being stable and production-ready, and you can expect mostly minor dependency updates in 2026.
## DASL: Starting Simple, Staying Compatible
Not every project needs IPLD's full flexibility. [DASL (Data Addressable Structures and Links)](https://dasl.ing/) offers a streamlined subset: fewer decisions, fewer dependencies, easier to implement. We worked to ensure the DASL specifications remained a strict subset of IPLD, so data created with DASL tools remain seamlessly compatible with the broader IPLD ecosystem.
## Thank You
Special thanks to [@Stebalien](https://github.com/Stebalien/) and [@rvagg](https://github.com/rvagg/) for their countless hours helping maintain various IPLD and multiformats libraries.


@@ -0,0 +1,81 @@
---
title: "Content-Addressing: A Year In Review"
description: "Let's take a look at what happened in content addressing in 2025 — it's a lot!"
author: Robin Berjon
canonicalUrl: https://ipfsfoundation.org/content-addressing-2025-in-review/
date: 2026-01-15
permalink: '/2026-01-year-in-review/'
header_image: '/2026-01-sunrise-sea01.jpg'
tags:
- ecosystem
---
It's hard to believe that it was 2025 only two weeks ago, but all the same we'd like to wrap the year up tidily and look back at what happened in content addressing leading up to 2026!
"Content addressing?" you say. "Is there enough going on around content addressing to write a year in review post?" Content addressing has many uses, but two salient ones include trusting that you're getting the data you really want and ensuring that data can be independently verified without relying on the power of a centralized authority. It's easy to see how those two features are key to facing today's challenges. Over the past decade, the IPFS community has been at the forefront of making content addressing practical and accessible. Today, thousands of projects build on it, from decentralized websites and scientific data repositories to verifiable archives and supply chains.
The IPFS project began as an integrated full stack—content addressing, data formats, and peer-to-peer networking bundled together. Over time, it has evolved into a suite of technologies that work well together but also make sense independently.
This post focuses on one pillar: **content addressing** — the building blocks that let you identify, verify, and link data by what it contains. These tools (IPLD, multiformats, CIDs) are network-agnostic: you can use them with peer-to-peer systems, client-server architectures, or anything in between. The other historic pillar of IPFS, peer-to-peer networking, is a story for a future post.
## Modularity
Closer to home, about a year ago we said that we wanted to focus on making the IPFS technology suite more modular, adopting the principle that it should operate more in line with the good old-fashioned tenets of Unix philosophy: small tools strung together to assemble great power (if not always with great responsibility). For all that it may be accepted wisdom that the best-laid plans of mice and men often go awry, looking back over 2025 we're delighted to see that this one panned out.
In truth, the community was already ahead of us here, quietly shipping tight, purpose-built libraries for CAR, IPLD, and other primitives in the IPFS family. Together, over the past year, we've made real progress on modularity through community-wide efforts spanning standards work, new specifications, and rethinking how our core libraries are structured.
And what better way to show off modularity than by giving you [a year-in-review post (about Rust IPLD)](https://blog.ipfs.tech/2026-01-ipld-2025-review/) inside this year-in-review post? Check out how we got [a 50% speed improvement](https://bsky.app/profile/marshal.dev/post/3m6wqrij2es2v) in the Python wrapper around the Rust lib and other great boosts from migrating off of the old libipld and onto more modular implementations.
This year also saw lots of action in the [DASL](https://dasl.ing/) space. If you don't know DASL, it's the part of the IPFS family that's laser-focused on adoption, interoperability, and web-style systems that need to be resilient in the face of dubious code, open-ended systems, and — *gasp* — potentially high volumes of users. DASL is all about modularity and tiny specs that mesh together like small Unix tools. (Read [our introduction to DASL](https://ipfsfoundation.org/dasl-a-simple-way-to-reference-digital-content/) from earlier this year.) In addition to simple subsets of [CIDs](https://dasl.ing/cid.html), [CAR](https://dasl.ing/car.html), and DAG-CBOR (aka [DRISL](https://dasl.ing/drisl.html)), DASL also supports HTTP retrieval ([RASL](https://dasl.ing/rasl.html)), packaging metadata ([MASL](https://dasl.ing/masl.html)), and bigger data ([BDASL](https://dasl.ing/bdasl.html)). And before you ask: yes, we do have a resident acronym expert on staff.
We're driving interoperability between implementations thanks to the [amazing test suite](https://hyphacoop.github.io/dasl-testing/) that the [Hypha Worker Co-Operative](https://hypha.coop/) built based on an IPFS Foundation grant. (They [wrote about the testing work](https://ipfsfoundation.org/hypha-dasl-the-test-suite/) on this very blog.) Looking at test suites over the weeks has been cause for celebration: where initially there had been red everywhere — because *no one* writes perfectly interoperable code without a test suite — we can see green growing fast with increasing alignment across the board. And this very interoperability has made it possible for us to submit an Internet Draft lovingly titled [*The tag-42 profile of CBOR*](https://ipfs-tech.github.io/cbor42/draft-caballero-cbor-cbor42.html) (after the CBOR tag for CIDs) to the IETF. This draft covers DASL CIDs and DRISL, and is particularly interesting in the context of ongoing discussions to standardize higher-level parts of the AT protocol at the IETF, like the “repository sync” operated over (DRISL-encoded) personal data servers by relays polling them for recent changes.
We hold monthly virtual meetings for the content addressing community, alternating between CID Congresses (presentations and discussions) and DASLing groups (hands-on working sessions). They're always on the 3rd Thursday of each month; subscribe to the [CID Congress Luma calendar](https://luma.com/cid-congress) to join.
## Ecosystem Tooling
One of the joys of working on a truly open ecosystem is that you get genuine surprises. In July, a new IPFS client that none of us had ever heard of, identifying itself as P2Pd and written in Python, launched and within only a few days skyrocketed to power 15% of the IPFS public network (Amino); it has since stabilized near 10%.
Also within the Python community, a growing number of geospatial projects have been using IPFS and contributing new tooling. One example is the [ORCESTRA Campaign](https://orcestra-campaign.org/intro.html), an international field study of tropical convection over the Atlantic Ocean. It generated large volumes of observational data from aircraft, ships, and ground stations to study how tropical storms form and organize. ORCESTRA chose IPFS as its distributed storage layer to make datasets immediately accessible, verifiable, and resilient to single points of failure, addressing pain points from previous field campaigns where centralized systems were too slow for day-to-day scientific work. [ipfsspec](https://pypi.org/project/ipfsspec/) brings verified IPFS retrieval to the Python data science ecosystem by implementing the [fsspec](https://filesystem-spec.readthedocs.io/) interface, the same abstraction layer used by xarray, pandas, Dask, and Zarr for remote data access.
The University of Maryland's EASIER (Efficient, Accessible, and Sustainable Infrastructure for Extracting Reliable) Data Initiative has also released [ipfs-stac](https://pypi.org/project/ipfs-stac/) v0.2.0, a pivotal tool for onboarding and interfacing with geospatial data via [STAC](https://stacspec.org/en) APIs on IPFS.
Of note is ORCESTRA's straightforward tooling-reuse approach: they prepare data to work with IPFS and integrate with Kubo, as can be observed in [their codebase](https://github.com/orcestra-campaign). We've been interested in IPFS tooling for geospatial work and for scientific data in general, and this is as good an example as they come.
One thing that content addressing is useful for is data management, notably data provenance and attaching verifiable attestations to data sets for governance or compliance purposes. Within that space, we've been impressed with [EQTYLab's product suite](https://www.eqtylab.io/) that uses IPFS primitives for precisely those purposes. It simply looks slick and eminently usable.
And lest we forget: Bluesky blew past 40 million users this year, the growing AT ecosystem has over 400 apps with daily activity, and the community has shipped many libraries for all major languages to work with AT data and protocol components. Not too shabby for a content-addressed social network. (See our write-up of [AT and the Eurosky event](https://ipfsfoundation.org/ipfs-at-eurosky-live-berlin-highlights-from-a-bright-future/) from November.)
## Performance and Usability
It's been a great year for Kubo, shipping seven major releases up to the latest and shiniest [v0.39](https://github.com/ipfs/kubo/releases/tag/v0.39.0). But the number of releases is less impressive than what was in them, and if it feels like you're breathing dust right now, it might be from Kubo's radical performance improvements. The DHT system was rebuilt from the ground up and [the new "sweep" provider](https://ipshipyard.com/blog/2025-dht-provide-sweep/) is able to efficiently provide hundreds of thousands of CIDs without tickling your memory or risking open warfare with your ISP. This joins Bitswap improvements that have demonstrated 50-95% bandwidth improvements and 80-98% message volume reduction in testing. Even better, those CIDs can now be served directly from your node to browsers using [AutoTLS](https://blog.libp2p.io/autotls/), which automatically sets up the certificate needed to make Secure WebSocket connections work in many places they could not previously. Conversely, Kubo can now fetch over HTTP, so you can use battle-tested HTTP infrastructure to serve content to IPFS networks.
Helia scored a big win shipping [verified fetch](https://www.npmjs.com/package/@helia/verified-fetch), a drop-in replacement for the classic Fetch API that verifies data for you. In turn, verified fetch powers the mighty [Service Worker Gateway](https://github.com/ipfs/service-worker-gateway) which is a key component that will allow us to phase out HTTP gateways entirely very soon. This makes IPFS all the more usable in the browser, without the end-user needing to manually install anything.
Iroh too had a rocking year with no fewer than 19 releases (and there I was thinking Rust made shipping hard…) and adding over 4,500 GitHub stars inside of 2025. They added support for many protocols, including live audio/video (working with [Streamplace](https://stream.place/)!), and many of those, like gossip or blobs, compile to WASM and run in the browser. The community growing around Iroh is nothing short of amazing, having brought us [Typescript bindings](https://github.com/rayhanadev/iroh-ts), [Alt-Sendme](https://github.com/tonyantony300/alt-sendme) (with 4,500 stars of its own in Q4 alone!), the high-performance end-to-end testing platform [Endform](https://endform.dev/), the wallet [Fedi](https://www.fedi.xyz/), and [Strada](https://strada.tech/), a collaborative suite for creative teams that need high-speed access to massive media content.
And the standards appreciators among you have some meaty, perhaps even gamey from long maturation, specs to sink your teeth into: the [UnixFS](https://specs.ipfs.tech/unixfs/) format used by most IPFS systems that expose some form of file system abstraction and [Kademlia DHT](https://specs.ipfs.tech/routing/kad-dht/), which describes how DHT-based IPFS networks make it possible for nodes to find content from one another.
We also have the [CID Profiles](https://github.com/ipfs/specs/pull/499) specification almost, *almost* finished. CIDs have a lot of options, which is great whenever you need to take a Swiss Army chainsaw to your content identifiers but can make it challenging to get two people to generate the same CIDs (and therefore to verify content) as they need to be using exactly the same options. Profiles solve this by listing all the possible options so that they can be easily shared between parties that need to talk.
## Events With A Wide Community
One of our areas of focus this year (and we're not about to stop!) was to find out more about how our stack or content addressing in general could be used to solve problems that people have across the board, from syncing data faster to helping save democracy with more governable protocols (which, yes, content addressing does *help* with). We do this by meeting people where they are, learning about the problems you have across many domains and walks of life.
This included our own [Hashberg](https://ipfs.fyi/hashberg) event in Berlin of course, but also so much more. We did attend a number of web3 events, such as the excellent [ProtocolBerg](https://protocol.berlin/), [LabWeek](https://labweek.io/), and [DevConnect](https://devconnect.org/), but we deployed more energy connecting with the wider world. This included hacking heavies like the [Local-First Conference](https://www.localfirstconf.com/), [FOSDEM](https://archive.fosdem.org/2025/), and [Web Engines HackFest](https://webengineshackfest.org/), as well as more ecosystem-like events like [Eurosky](https://www.eurosky.social/), [DecidimFest](https://meta.decidim.org/conferences/DecidimFest25), the [Cypherpunk Camp](https://cypherpunk.camp/), and classics like [Re:publica](https://re-publica.com/en) or [MozFest](https://www.mozillafestival.org/en/). We also hopped over to Japan to see how content addressing might work with the [Originator Profile](https://originator-profile.org/en-US/). We rubbed elbows with research and standards communities at the [Dagstuhl Seminar](https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/25112), the [Public AI Retreat](https://publicai.network/), and of course the [IETF](https://www.ietf.org/).
We also went well outside of tech and into the real world at [RightsCon](https://www.rightscon.org/), the French [AI Summit](https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia) and its [FreeOurFeeds](https://freeourfeeds.com/) side event, the [IGF](https://www.intgovforum.org/en) notably with its workshop on social media infrastructure, the [Summit on European Digital Sovereignty](https://bmds.bund.de/aktuelles/eu-summit), and the [UNDP](https://www.undp.org/)'s event on governance innovation. It's been a whirlwind of a year and we've learned a lot that continues to inform our work.
We'll be announcing more in 2026 but you can already catch us speaking at FOSDEM in early February (both [Mosh](https://fosdem.org/2026/schedule/event/TRQ9LV-decentralized-to-doorsteps/) and [Robin](https://fosdem.org/2026/schedule/event/W8CJXD-dasl/)), as well as at [the AT Proto meetup on the Friday prior](https://luma.com/jj7nths0), and [ATmosphereConf](https://ti.to/atmosphereconf/atmosphereconf2026) in March. And look for Volker speaking on An Open, Decentralized Network for Metadata (in German) at [FOSSGIS Göttingen](https://www.fossgis-konferenz.de/2026/)!
## Looking Ahead
As we ride into the reddened sunrise of 2026, looking ominously stylish as four horsepeople are wont to, we already have a batch of goodies we've been preparing.
We want to extend the capabilities of our current content-addressing stack, especially for large data. Watch out for exciting announcements in the pipeline around geo data and verifiable range requests that work over vanilla HTTP. We're also continuing our partnership with the always-brilliant [Igalia](http://igalia.com/) and hope to bring a number of improvements to *ye aulde* browsers, notably for streaming verification.
We've also been talking with our friends at [Streamplace](https://stream.place/) about collaborating on specs for a usable subset of [C2PA](https://c2pa.org/) and deterministic MPEG-4 containers so that you can watch content-addressable videos about content addressing. Another potential collaboration with secure chat providers and others who'd like to align on web-app containers might happen. It's still early days, we'll be sure to keep you posted as soon as there's even just a public description of the problem that covers more than me being a tease about it.
Overall, we'll be bringing more of the same. We'll keep working on modularization, interoperability, and adoption. We'll keep investing in test suites and implementations as needed. We'll keep pushing the IPFS family of technologies forward until it's so consistently easy to use that you stop noticing it entirely, until it's so straightforward you need not think about anything other than the specific problem you wish to solve.
Finally, the most important thing that we look forward to in 2026 is your participation. Everything we did in 2025 was to make things better for you and was always informed by what we heard from people or observed in the wild. Next year will be no different, but for that to work we need to hear from you! There are many ways you can reach out: you can post on [the forum](https://discuss.ipfs.tech/), hit [@ipfs.tech](https://bsky.app/profile/ipfs.tech) up on the Bluesky, open an issue on the relevant repo, come talk to us at an in-person event, or join any meeting on the [IPFS Calendar](https://luma.com/ipfs) that strikes your fancy. The rumors are true: we do bite; but we only bite the bad people, so come talk!


@@ -1,7 +1,8 @@
---
date: 2025-02-28
permalink: /2025-could-ipfs-prevent-bybit-hack/
title: 'Could IPFS have prevented the Bybit hack?'
canonicalUrl: https://ipshipyard.com/blog/2025-could-ipfs-prevent-bybit-hack/
title: 'Could IPFS Have Prevented the Bybit Hack?'
description: 'The Bybit hack revealed several security failures, this post examines whether IPFS could have helped prevent the hack and practical solutions for dapp developers.'
author: Daniel Norman, Marcin Rataj
header_image: /2022-ipfs-gateways-1.png
@@ -13,7 +14,7 @@ tags:
## The Bybit Hack and IPFS
Bybit's [recent hack](https://www.reuters.com/technology/cybersecurity/cryptos-biggest-hacks-heists-after-15-billion-theft-bybit-2025-02-24/), which resulted in the loss of $1.4, is a reminder of the importance of verification for frontends, especially dapp frontends in the Web3 ecosystem.
Bybit's [recent hack](https://www.reuters.com/technology/cybersecurity/cryptos-biggest-hacks-heists-after-15-billion-theft-bybit-2025-02-24/), which resulted in the loss of $1.4 billion, is a reminder of the importance of verification for frontends, especially dapp frontends in the Web3 ecosystem.
Based on what we know at the time of writing, IPFS, through local verification, could have served as a line of defense in this sophisticated hack, potentially preventing it altogether.


@@ -4,6 +4,29 @@ type: Ecosystem content
sitemap:
  exclude: true
data:
  - title: 'Shipyard 2025: IPFS Year in Review'
    description: 'Seven Kubo releases made self-hosted IPFS practical. Highlights: DHT Provide Sweep, AutoTLS, HTTP retrieval, and Helia for trustless browser retrieval.'
    date: 2025-12-19
    publish_date:
    card_image: /blog-post-placeholder.png
    path: https://ipshipyard.com/blog/2025-shipyard-ipfs-year-in-review/
    tags:
      - kubo
      - boxo
      - helia
      - gateways
      - DHT
      - AutoTLS
      - delegated routing
      - browsers
  - title: 'Provide Sweep: Solving the DHT Bottleneck for Self-Hosting IPFS at Scale'
    date: 2025-11-26
    publish_date:
    card_image: /blog-post-placeholder.png
    path: https://ipshipyard.com/blog/2025-dht-provide-sweep/
    tags:
      - DHT
      - kubo
  - title: 'libp2p at IPFS þing 2023 Recap'
    date: 2023-05-11
    publish_date:

src/_blog/newsletter-205.md Normal file

@@ -0,0 +1,112 @@
---
title: "🌳 IPFS Newsletter 205: HTTP, P2P in browsers, Kubo speedup & more"
description: "The IPFS Newsletter is back, with many exciting updates to share: HTTP support across the IPFS stack, P2P in browsers, Kubo speedup, and more."
date: 2025-05-23
permalink: "/newsletter-205"
header_image: "/ipfsnews.png"
tags:
- newsletter
---
Welcome back to the IPFS Newsletter! After a hiatus, we have many exciting updates to share.
### More HTTP Support Across the IPFS Stack
Multiple IPFS libraries are embracing or adding support for HTTP (usually in addition to Bitswap). Benefits include lower data provision costs, easier integration with existing HTTP libraries and services, and seamless web compatibility.
- [Kubo](https://github.com/ipfs/kubo) added support for trustless HTTP retrieval on an opt-in basis in [v0.35](https://github.com/ipfs/kubo/releases/tag/v0.35.0).
- [Rainbow](https://github.com/ipfs/rainbow), the high performance HTTP Gateway implementation, added support for trustless HTTP retrieval in [v1.12](https://github.com/ipfs/rainbow/releases/tag/v1.12.0).
- Helia, [@helia/verified-fetch](https://github.com/ipfs/helia-verified-fetch) and the [Service Worker Gateway](https://github.com/ipfs/service-worker-gateway) already support trustless HTTP retrieval.
- [RASL](https://dasl.ing/rasl.html) includes a simple HTTP-based retrieval method.
The next step is adding support for HTTP providing to the DHT ([issue #496](https://github.com/ipfs/specs/issues/496)). This would let nodes announce themselves as HTTP providers alongside or instead of Bitswap.
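The "trustless" in trustless HTTP retrieval means the client verifies the bytes locally instead of trusting the gateway. A minimal sketch of that check in Python (the helper name is illustrative, not part of any IPFS library; for brevity, the CID's digest is shown as hex):

```python
import hashlib

def verify_block(block: bytes, expected_digest_hex: str) -> bool:
    """Recompute sha2-256 over the raw block and compare it to the
    digest carried inside the CID (represented here as hex)."""
    return hashlib.sha256(block).hexdigest() == expected_digest_hex

# A client fetching /ipfs/{cid} with "Accept: application/vnd.ipld.raw"
# would run this check on the response body before using it.
block = b"hello ipfs"
digest = hashlib.sha256(block).hexdigest()
assert verify_block(block, digest)            # bytes match the CID digest
assert not verify_block(b"tampered", digest)  # a corrupted response is rejected
```

Because the check depends only on the bytes and the CID, it works the same whether the block arrived over Bitswap or plain HTTP.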
### Service Worker Gateway Provides P2P Capabilities in the Browser
The [Service Worker Gateway](https://github.com/ipfs/service-worker-gateway) is a browser-based IPFS gateway that uses Service Workers to handle p2p retrieval, hash verification, and other IPFS functionality. Try it out at [inbrowser.link](https://inbrowser.link).
The Service Worker Gateway has been getting a lot of love recently: [v1.12](https://github.com/ipfs/service-worker-gateway/releases/tag/v1.12.0) includes configurable timeouts, better error pages, and a signed binary for local deployment. For a deep dive, check out the [Service Workers for IPFS on the Web](https://youtu.be/qtIJXRgxjVA?feature=shared) video. ([Shipyard](https://ipshipyard.com/))
### Drop-in Service Worker Example for App Developers
Here's a [drop-in service worker example](https://github.com/ipshipyard/drop-in-service-worker). It intercepts hardcoded requests to centralized gateways, using [@helia/verified-fetch](https://github.com/ipfs/helia-verified-fetch) to retrieve and verify content directly from peers. ([Shipyard](https://ipshipyard.com/))
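The core move in the drop-in worker is recognizing a hardcoded gateway URL and extracting the CID (plus any subpath) so the content can be fetched and verified peer-to-peer instead. A language-agnostic sketch of that parsing step, in Python for illustration (the real worker does this in JavaScript; the function name is hypothetical and only path-style `/ipfs/{cid}/...` URLs are handled):

```python
from urllib.parse import urlparse

def extract_cid_path(url: str):
    """Return (cid, subpath) for a path-style gateway URL, else None."""
    parts = urlparse(url)
    segments = parts.path.split("/")
    # A gateway path looks like /ipfs/{cid}/optional/sub/path
    if len(segments) >= 3 and segments[1] == "ipfs":
        cid, subpath = segments[2], "/".join(segments[3:])
        return cid, subpath
    return None

# Requests matching this pattern get re-routed through verified retrieval;
# everything else passes through untouched.
```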
### IPNI Service Update
The [IPNI](https://docs.ipfs.tech/concepts/ipni/), a content routing index for large content providers, suffered service degradation in April, disrupting the ability to find providers for CIDs. The IPNI team has made hardware and software improvements to avoid future disruptions, and service is improving as the newly-upgraded indexers catch up.
In the interim, a [new feature](https://github.com/ipfs/someguy/pull/110) in [Someguy](https://github.com/ipfs/someguy) let large content providers run a self-hosted [HTTP delegated routing](https://specs.ipfs.tech/routing/http-routing-v1) endpoint, providing an immediate remedy while IPNI service was restored.
Join the `#ipni` channel on the [Filecoin Slack](https://filecoin.io/slack) to follow along. A Content Routing WG will be meeting biweekly. More: [background](https://hackmd.io/sRmr-vnPRH2THaPxMIoKjA) & [latest notes](https://hackmd.io/Zxem7bVBRB6ZVDnaqS_kmw).
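Self-hosted endpoints like Someguy's implement the Delegated Routing V1 HTTP API, where a provider lookup is a single GET request. A small sketch of the request shape (the base address is illustrative; the path follows the spec):

```python
def providers_url(base: str, cid: str) -> str:
    """Build the Delegated Routing V1 provider-lookup URL:
    GET /routing/v1/providers/{cid}, returning JSON provider records."""
    return f"{base.rstrip('/')}/routing/v1/providers/{cid}"

# e.g. against a self-hosted Someguy instance (address is illustrative):
url = providers_url("http://127.0.0.1:8190", "bafyexamplecid")
```

Any client that can issue HTTP GETs can resolve providers this way, with no DHT participation required.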
### 20-40x Speedup for Data Onboarding in Kubo
In the past, adding data to Kubo with `ipfs add` while Kubo was running was slow due to inefficient provider queue handling. A [new optimization](https://github.com/ipfs/boxo/pull/888) in Boxo yields a 20-40x speedup (higher for larger datasets), making it easier to onboard large data sets while Kubo is running. Available in [Kubo v0.35](https://github.com/ipfs/kubo/blob/release-v0.35.0/docs/changelogs/v0.35.md). ([Shipyard](https://ipshipyard.com/))
## Protocol and Standards
### DASL and IETF Draft for CBOR/c-42
[DASL](https://dasl.ing) (Data-Addressed Structures & Links) is a small set of specs for working with content-addressed, linked data. First released in December 2024, DASL now includes sub-specs for encoding (CID and dCBOR42, which are strict subsets of IPFS CIDs and IPLD), metadata ([MASL](https://dasl.ing/masl.html)), and retrieval ([RASL](https://dasl.ing/rasl.html)) of content-addressed data.
[The tag-42 profile of CBOR Core](https://datatracker.ietf.org/doc/draft-caballero-cbor-cborc42/) was submitted as an IETF Draft on 22 May, paving the way for web-wide standardization of CBOR/c-42 and CIDs. (IPFS Foundation)
### Practical Interoperability for CIDs
The original [CID specification](https://github.com/multiformats/cid) was designed for flexibility and future-proofing, supporting various encodings, graph widths, and optimizations. In practice, this flexibility yields multiple CIDs for the same input, making it challenging to establish CID equivalency for the same data across implementations.
Efforts are underway to increase practical interop without losing futureproofing: [IPIP-499: CID Profiles](https://github.com/ipfs/specs/pull/499) proposes a set of standard profiles for UnixFS, and [Kubo v0.35](https://github.com/ipfs/kubo/releases/tag/v0.35.0) adds [new config options](https://github.com/ipfs/boxo/pull/906) towards this goal. For more context, see the lively [forum thread](https://discuss.ipfs.tech/t/should-we-profile-cids/18507).
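The reason one input can yield many CIDs is that a CID bakes in the version, codec, and hash choices, so flipping any of those knobs changes the string even when the bytes are identical. A minimal sketch of CIDv1 construction following the multiformats layout (assuming sha2-256 and base32 multibase; note a real dag-pb CID hashes the dag-pb-encoded block, not the raw bytes, so the second call below is only for illustrating the codec byte's effect):

```python
import base64, hashlib

def cid_v1(data: bytes, codec: int) -> str:
    """CIDv1 = multibase 'b' + base32(0x01 ++ codec ++ 0x12 0x20 ++ sha256(data))."""
    digest = hashlib.sha256(data).digest()
    raw = bytes([0x01, codec, 0x12, 0x20]) + digest
    return "b" + base64.b32encode(raw).decode("ascii").lower().rstrip("=")

data = b"same bytes"
raw_cid = cid_v1(data, 0x55)    # raw codec -> "bafkrei..." prefix
dagpb_cid = cid_v1(data, 0x70)  # dag-pb codec byte -> "bafybei..." prefix
assert raw_cid != dagpb_cid     # identical data, two different CIDs
```

Chunking strategy and DAG width multiply the possibilities further, which is exactly the space that CID profiles aim to narrow.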
### Amino DHT Spec
The Amino DHT is a distributed key-value store used for peer and content routing records within IPFS Mainnet. It extends the libp2p Kademlia DHT with IPFS-specific features, such as CIDs and IPNS records. Until recently, it had no formal spec beyond the [libp2p Kademlia DHT spec](https://github.com/libp2p/specs/blob/master/kad-dht/README.md).
[PR #497](https://github.com/ipfs/specs/pull/497) addresses this gap with the goal of improving interoperability, security, and clarity across implementations. ([Shipyard](https://ipshipyard.com/))
## Code and Tools
### 🚢 Releases
- [kubo 0.35](https://github.com/ipfs/kubo/blob/master/docs/changelogs/v0.35.md) & [0.34](https://github.com/ipfs/kubo/blob/master/docs/changelogs/v0.34.md) — Lots of new features, including opt-in HTTP retrieval, new data import options that help with CID equivalency, [AutoTLS](https://blog.libp2p.io/autotls/), and performance improvements to bitswap, providing, and data onboarding commands. `ipfs add` is now 20-40x faster.
- [helia 5.4.1](https://www.npmjs.com/package/helia) — New usability improvements to the [`unixfs.stat` command](https://github.com/ipfs/helia/pull/760), and a [new option allowing finer control](https://github.com/ipfs/helia/pull/772) in how gateways are picked for block retrieval. Additionally, a bug fix in js-libp2p ensures abort signals passed to network operations are properly handled.
- [IPFS Cluster v1.1.4](https://github.com/ipfs-cluster/ipfs-cluster/releases/tag/v1.1.4) — A maintenance release fixes the IPFS Cluster Docker image for arm64 architectures.
- [Rainbow v1.13](https://github.com/ipfs/rainbow/releases/tag/v1.13.0) & [v1.12](https://github.com/ipfs/rainbow/releases/tag/v1.12.0) — Support for HTTP retrieval and a new option to control HTTP providers.
- [Boxo v0.30.0](https://github.com/ipfs/boxo/releases/tag/v0.30.0) — The reference library shared by Kubo and Rainbow adds support for custom UnixFS DAG width and the ability to enable/disable the bitswap server.
- [Someguy v0.9.1](https://github.com/ipfs/someguy/releases/tag/v0.9.1) — The Delegated Routing API server implementation adds support for probing HTTP gateway endpoints and returning those as providers.
- [Service Worker Gateway v1.12](https://github.com/ipfs/service-worker-gateway/releases/tag/v1.12.0) — Configurable timeouts, useful debug info on error pages, and more.
### Ecosystem Spotlights
- The [Helia 101 examples for Node.js](https://github.com/ipfs-examples/helia-examples/tree/main/examples/helia-101) have been overhauled with many new examples: getting started with Helia, pinning, IPNS, and more.
- [iroh v0.35](https://www.iroh.computer/blog/iroh-0-35-prepping-for-1-0) — The last planned version before the 1.0 release candidate later this year.
- [Seed Hypermedia](https://seed.hyper.media), an open protocol and app for authorship and collaboration, published [a new blog post](https://seed.hyper.media/blog/collaborating-on-the-web-with-seed-hypermedia-protocol-and-ipfs) describing core principles and new features in the [Seed Hypermedia App](https://seed.hyper.media/hm/download), which features a clean, thoughtfully designed interface.
- [Peergos 1.3](https://github.com/Peergos/web-ui/releases/tag/v1.3.0) — The p2p, secure file storage, social network, and application protocol releases a new sync GUI and API for managing the sync client.
- Good news for WebTransport: `serverCertificateHashes`, a feature in the [WebTransport](https://blog.ipfs.tech/2024-shipyard-improving-ipfs-on-the-web/#webtransport) spec, necessary for browsers to connect to IPFS nodes over WebTransport without CA-signed TLS certs, was considered for removal. After a [lengthy discussion, the WebKit team agreed to implement it](https://github.com/w3c/webtransport/issues/623#issuecomment-2895955428), which means Safari users will also benefit from direct WebTransport connections to IPFS nodes.
- [TeaTime](https://github.com/bjesus/teatime) is a static distributed library system powered by IPFS, SQLite and GitHub.
- [js-blockstore-opfs](https://github.com/dozyio/js-blockstore-opfs) is an [Origin Private File System (OPFS)](https://developer.mozilla.org/en-US/docs/Web/API/File_System_API/Origin_private_file_system) TS/JS blockstore implementation for use with Helia and js-libp2p in the browser. ([@dozyio](https://github.com/dozyio))
- [Distributed Press](https://distributed.press/), a publishing tool for the distributed web, is [migrating to Helia](https://github.com/hyphacoop/api.distributed.press/pull/101).
## Services and Providers
- [Filebase launches IPFS RPC API Support](https://filebase.com/blog/introducing-support-for-the-ipfs-rpc-api) with Kubo-compatible endpoints to simplify integration with existing tools — no node management required. ([Docs](https://docs.filebase.com/api-documentation/ipfs-rpc-api))
- [Filebase launches Real-Time Gateway Activity Streams](https://filebase.com/blog/introducing-ipfs-gateway-activity-streams) (v0), providing real-time visibility into IPFS gateway traffic, including IPs and status codes.
- [Bluesky Backups by Storacha](https://bsky.storage/): This beta webapp [saves regular snapshots](https://www.youtube.com/watch?v=CIym-b-DA5s) of your ATProto data and installs a recovery key into your [DID PLC profile](https://github.com/did-method-plc/did-method-plc), bringing true credible exit to Bluesky. [Github repo](https://github.com/storacha/bluesky-backup-webapp-server).
### Articles and Tutorials
- 🎥 [Deploy Static Apps and Websites to IPFS with Github Actions](https://www.youtube.com/watch?v=ZRrPBqqFKFU). Whether you're using React, Vuepress, Astro, Next.js, or any other static site generator, the [IPFS Deploy Action](https://github.com/marketplace/actions/deploy-to-ipfs) will help you get your web application deployed on IPFS. Here's the [docs page](https://docs.ipfs.tech/how-to/websites-on-ipfs/deploy-github-action/#what-is-the-ipfs-deploy-action). (Daniel Norman, Shipyard)
- 🎥 [Service Workers for IPFS on the Web](https://youtu.be/qtIJXRgxjVA?feature=shared). Deep dive into Service Workers, how they help IPFS on the Web, and how to use Service Workers today for verified peer-to-peer retrieval on the Web. (Daniel Norman, Shipyard)
- 📘 [Setup a DNSLink Gateway to serve static sites on IPFS with Kubo and Caddy](https://docs.ipfs.tech/how-to/websites-on-ipfs/dnslink-gateway/).
- [Smaller Hash BTrees](https://piss.beauty/post/smaller-hash-btrees) — Insightful blog post delving into optimization techniques to reduce the size of BTree indices when storing CIDs (using a real dataset from [ATProto](https://atproto.com/guides/glossary#cid-content-id)) in a PostgreSQL database. ([Stellz](https://bsky.app/profile/piss.beauty))
## Community & Events
- [Grantees Announced for Spring 2025 IPFS Utility Grants](https://blog.ipfs.tech/2025-05-grants/) — 3 grantees were selected: `rsky-satnav` CAR Explorer (Rudy Fraser, Blacksky), CAR Indexing Tools (Ben Lau, Basile Simon, & Yurko Jaremko, Starling Lab), and DASL Interop Testing (Cole Anthony Capilongo, Hypha Co-op), who will be presenting their work at [CID Congress #3](https://lu.ma/ofjr7mgd).
- [USER * AGENTS * BERLIN](https://lu.ma/v457jxp2?tk=8UZBKL) (May 29-30, Berlin) — Chat or cowork with people interested in maximizing user agency in everyday software, and meet long-time contributors to the IPFS ecosystem.
- [Hashberg: A Content Addressing Architectures Summit](https://lu.ma/nbv106v5) (June 11, Berlin) — An intimate, 1-day event to collaborate on critical topics across the IPFS ecosystem.
- [Protocol Berg v2](https://protocol.berlin/) (June 12-13, Berlin) — Several talks on IPFS.
- [JS Nation 2025](https://jsnation.com/#person-daniel-norman) (June 16, Virtual) — "Demystifying IPFS: A Web Developer's Guide to Content Distribution"
- [CID Congress #3](https://lu.ma/ofjr7mgd) (June 25, Virtual)
If you made it this far, thanks for reading!


@@ -1,5 +1,72 @@
---
data:
- title: 'Just released: Kubo 0.39.0!'
date: "2025-11-27"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.39.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.38.2!'
date: "2025-10-30"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.38.2
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.38.1!'
date: "2025-10-08"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.38.1
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.38.0!'
date: "2025-10-02"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.38.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.37.0!'
date: "2025-08-27"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.37.0
tags:
- go-ipfs
- kubo
- telemetry
- autoconf
- title: 'Just released: Kubo 0.36.0!'
date: "2025-07-14"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.36.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.35.0!'
date: "2025-05-21"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.35.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.34.1!'
date: "2025-03-25"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.34.1
tags:
- go-ipfs
- kubo
- AutoTLS
- title: 'Just released: Kubo 0.34.0!'
date: "2025-03-20"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.34.0
tags:
- go-ipfs
- kubo
- AutoTLS
- title: 'Announcing AutoTLS: Bridging IPFS and the Web'
date: "2025-02-17"
publish_date: null


@@ -1,6 +1,7 @@
---
date: 2024-11-25
permalink: /2024-shipyard-improving-ipfs-on-the-web/
canonicalUrl: https://ipshipyard.com/blog/2024-shipyard-improving-ipfs-on-the-web/
title: 'IPFS on the Web in 2024: Update From Interplanetary Shipyard'
description: 'Update from Interplanetary Shipyard on our efforts to make IPFS work on the Web.'
author: Daniel Norman


@@ -6,6 +6,7 @@ author: Adin Schmahmann
date: 2024-04-08
permalink: '/shipyard-hello-world/'
canonicalUrl: https://ipshipyard.com/blog/shipyard-hello-world/
header_image: '/shipyard-hello-world.png'
tags:
- 'ipfs'


@@ -4,6 +4,26 @@ type: Video
sitemap:
exclude: true
data:
- title: 'Service Workers for IPFS on the Web'
date: 2025-05-21
publish_date: 2025-05-21T10:00:00+00:00
path: https://www.youtube.com/watch?v=qtIJXRgxjVA
tags:
- IPFS Deploy Action
- tutorial
- IPFS Gateway
- Service Worker Gateway
- dapps
- title: 'Deploy Static Apps and Websites to IPFS with GitHub Actions'
date: 2025-04-04
publish_date: 2025-04-04T10:00:00+00:00
path: https://www.youtube.com/watch?v=ZRrPBqqFKFU
tags:
- IPFS Deploy Action
- tutorial
- guide
- dapps
- GitHub Actions
- title: 'Debugging CID Retrievability With IPFS Check'
date: 2024-09-04
publish_date: 2024-09-04T12:00:00+00:00

(Binary image files changed: several blog assets added or recompressed, including new files src/assets/dev-tools.jpg and src/assets/ed25519.jpg.)