Compare commits

435 Commits

Author SHA1 Message Date
chase.fil
c33784f485 Merge branch 'main' into dependabot/github_actions/actions/checkout-4 2023-09-27 12:29:13 -06:00
dame.eth
29017966f3 Merge pull request #623 from ipfs/damedoteth-patch-2
Update 2023-09-amino-refactoring.md
2023-09-27 13:42:40 -04:00
dame.eth
a9f12a1553 Update 2023-09-amino-refactoring.md 2023-09-27 13:19:33 -04:00
dame.eth
2ca03944d3 Merge pull request #622 from ipfs/damedoteth-patch-2
Update 2023-09-amino-refactoring.md
2023-09-27 13:09:34 -04:00
dame.eth
6b6166f9c9 Update 2023-09-amino-refactoring.md 2023-09-27 13:03:39 -04:00
dame.eth
131866db3b Merge pull request #619 from ipfs/dht-amino-refactoring
Add Blogpost: DHT Refactoring work
2023-09-27 11:52:02 -04:00
dame.eth
dce37a0cc6 Update 2023-09-amino-refactoring.md 2023-09-27 11:40:10 -04:00
chase.fil
c57ff65ecb Merge branch 'main' into dependabot/github_actions/actions/checkout-4 2023-09-26 13:38:32 -06:00
Yiannis Psaras
215327bef2 adding office hours luma link 2023-09-26 19:25:33 +03:00
Yiannis Psaras
2ed5554199 Update src/_blog/2023-09-amino-refactoring.md
Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>
2023-09-26 19:22:25 +03:00
Yiannis Psaras
e25889dbcc Update src/_blog/2023-09-amino-refactoring.md
Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>
2023-09-26 19:21:33 +03:00
Yiannis Psaras
eb7ec42ae3 Update src/_blog/2023-09-amino-refactoring.md
Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>
2023-09-26 19:20:55 +03:00
Yiannis Psaras
3e7d4d54c0 changes title and date 2023-09-26 09:14:45 +03:00
dame.eth
1dec1d9cba Merge branch 'main' into dht-amino-refactoring 2023-09-25 11:59:27 -04:00
dame.eth
9d1f82886b Update 2023-09-amino-refactoring.md 2023-09-25 11:45:53 -04:00
dame.eth
0395a8ddfb Merge pull request #620 from ipfs/damedoteth-patch-2
Create 2023-brave-infobar.md
2023-09-25 11:43:25 -04:00
dame.eth
abf5be27c5 Update 2023-brave-infobar.md 2023-09-25 11:35:29 -04:00
dame.eth
0163b9023a Update 2023-brave-infobar.md 2023-09-25 11:27:56 -04:00
github-actions[bot]
395b5ed2fc Optimised images with calibre/image-actions 2023-09-25 15:27:27 +00:00
dame.eth
4e57cff9c3 Update 2023-brave-infobar.md 2023-09-25 11:22:08 -04:00
dame.eth
10dbd7c187 Update src/_blog/2023-brave-infobar.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-09-25 11:21:17 -04:00
dame.eth
055788c32b Add files via upload 2023-09-25 11:20:09 -04:00
dame.eth
c68a88f115 Update 2023-09-amino-refactoring.md 2023-09-22 13:38:01 -04:00
dame.eth
8510202c1b Update 2023-09-amino-refactoring.md 2023-09-22 13:11:34 -04:00
dame.eth
17dbf40f05 Update 2023-09-amino-refactoring.md 2023-09-22 13:10:31 -04:00
dame.eth
17c71abcd1 Update 2023-brave-infobar.md 2023-09-22 12:50:05 -04:00
github-actions[bot]
8e2ad9b668 Optimised images with calibre/image-actions 2023-09-22 16:38:02 +00:00
dame.eth
d6de915758 Update 2023-brave-infobar.md 2023-09-22 12:33:41 -04:00
dame.eth
cb9d91a150 Add files via upload 2023-09-22 12:28:58 -04:00
dame.eth
c630b720b0 Create 2023-brave-infobar.md 2023-09-22 12:22:55 -04:00
Yiannis Psaras
a3eea803b3 fixes $leq$ syntax 2023-09-22 09:30:56 +03:00
Yiannis Psaras
d7ad16c5ae date update 2023-09-22 08:59:57 +03:00
Yiannis Psaras
bb1fe98a9c fixes table layout 2023-09-22 08:44:02 +03:00
Yiannis Psaras
467e3d1461 fixes broken link 2023-09-22 08:34:04 +03:00
github-actions[bot]
0f3c478234 Optimised images with calibre/image-actions 2023-09-21 19:13:57 +00:00
Yiannis Psaras
593379f76c header image 2023-09-21 22:08:43 +03:00
Yiannis Psaras
e316d7ddc7 DHT Refactoring work 2023-09-21 22:06:27 +03:00
dame.eth
20ce65a486 Merge pull request #618 from ipfs/damedoteth-patch-2
Rename 2024-ipfs-connect-istanbul.md to 2023-ipfs-connect-istanbul.md
2023-09-21 09:53:24 -04:00
dame.eth
79a2aaee9d Rename 2024-ipfs-connect-istanbul.md to 2023-ipfs-connect-istanbul.md 2023-09-21 09:48:45 -04:00
dame.eth
4c9fabf109 Merge pull request #616 from ipfs/damedoteth-patch-2
Create 2024-ipfs-connect-istanbul.md
2023-09-21 09:33:29 -04:00
dame.eth
2458aa9bda Update 2024-ipfs-connect-istanbul.md 2023-09-20 16:18:13 -04:00
dame.eth
d577e241ed Add files via upload 2023-09-20 16:14:54 -04:00
dame.eth
b55bebd2f6 Update 2024-ipfs-connect-istanbul.md 2023-09-20 16:01:59 -04:00
dame.eth
5d3d7dfbe7 Create 2024-ipfs-connect-istanbul.md 2023-09-20 15:47:07 -04:00
Henrique Dias
12ec4c4ff7 feat: make menu structure more consistent (#614)
* feat: remove 'Install', update 'About' links

* feat: add community and developers menu

* feat: remove team and help links
2023-09-19 12:42:30 +01:00
Chris Waring
baa3918a5e Merge branch 'main' into dependabot/github_actions/actions/checkout-4 2023-09-18 17:13:23 +01:00
dame.eth
35d5f72d35 Merge pull request #613 from ipfs/damedoteth-patch-2
Create ipfs-events-2024-survey.md
2023-09-18 12:12:09 -04:00
Chris Waring
0226e6508c Merge branch 'main' into dependabot/github_actions/actions/checkout-4 2023-09-18 17:08:32 +01:00
Chris Waring
b7ca433fcf fix: frontmatter formatting 2023-09-18 16:57:46 +01:00
dame.eth
57a4ad7585 Update ipfs-events-2024-survey.md 2023-09-18 11:44:02 -04:00
dame.eth
da9a487a4c Update ipfs-events-2024-survey.md 2023-09-18 11:39:07 -04:00
dame.eth
f9039db3be Update ipfs-events-2024-survey.md 2023-09-18 11:33:35 -04:00
dame.eth
33864df00f Update ipfs-events-2024-survey.md 2023-09-18 11:28:48 -04:00
dame.eth
11f201a4eb Update ipfs-events-2024-survey.md 2023-09-18 11:16:45 -04:00
dame.eth
aa15363a6b Create ipfs-events-2024-survey.md 2023-09-18 10:16:04 -04:00
Daniel Norman
83bc6e7a5c fix: plausible script (#612)
Co-authored-by: Daniel N <2color@users.noreply.github.com>
2023-09-14 15:29:03 +01:00
dame.eth
7527fb7093 Merge pull request #611 from ipfs/damedoteth-patch-2
Create newsletter-197.md
2023-09-12 12:58:17 -04:00
dame.eth
975b8d129d Update newsletter-197.md 2023-09-12 03:07:29 -04:00
dame.eth
fb004d1a01 Create newsletter-197.md 2023-09-12 02:59:42 -04:00
dependabot[bot]
9ab33ca6d3 Bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-11 12:59:45 +00:00
dame.eth
b329884f6d Merge pull request #609 from ipfs/damedoteth-patch-2
Update 2023-introducing-the-ecosystem-working-group.md
2023-09-05 12:18:29 -04:00
dame.eth
7a0e96d12c Update 2023-introducing-the-ecosystem-working-group.md 2023-09-05 12:12:31 -04:00
dame.eth
fa75e458a9 Merge pull request #608 from ipfs/damedoteth-patch-1
Update 2023-introducing-the-ecosystem-working-group.md
2023-09-05 11:48:30 -04:00
dame.eth
a9223eeac5 Update 2023-introducing-the-ecosystem-working-group.md 2023-09-05 11:43:08 -04:00
dame.eth
d4bf6c1280 Update 2023-introducing-the-ecosystem-working-group.md 2023-09-05 11:41:51 -04:00
dame.eth
598b9fe5cd Merge pull request #607 from ipfs/damedoteth-patch-1
Create 2023-introducing-the-ecosystem-working-group.md
2023-09-05 11:40:02 -04:00
dame.eth
e47a45bab0 Update 2023-introducing-the-ecosystem-working-group.md 2023-09-05 11:34:03 -04:00
dame.eth
20e84d4edd Update 2023-introducing-the-ecosystem-working-group.md 2023-09-05 10:53:26 -04:00
dame.eth
11ef7f8090 Update 2023-introducing-the-ecosystem-working-group.md 2023-09-05 05:57:24 -04:00
dame.eth
364b96b084 Create 2023-introducing-the-ecosystem-working-group.md 2023-09-05 05:24:20 -04:00
Jorropo
d6f92032ad Merge pull request #606 from ipfs/kubo-v0.22.0
Update Kubo: v0.22.0
2023-08-14 14:44:25 +02:00
Jorropo
fc438ac9c8 chore: add Kubo release note 2023-08-14 11:21:07 +00:00
dame.eth
d4087da24d Merge pull request #605 from ipfs/damedoteth-patch-1
Create newsletter-196.md
2023-08-09 11:13:41 -04:00
dame.eth
962b63c128 Update newsletter-196.md 2023-08-09 10:14:00 -04:00
dame.eth
8fc721143e Update newsletter-196.md 2023-08-09 10:11:05 -04:00
dame.eth
e96df58758 Update newsletter-196.md 2023-08-09 10:01:39 -04:00
dame.eth
1c351bf928 Update newsletter-196.md 2023-08-09 09:54:51 -04:00
dame.eth
3ab94579e4 Update newsletter-196.md 2023-08-09 09:42:46 -04:00
github-actions[bot]
940a6094ae Optimised images with calibre/image-actions 2023-08-08 21:23:34 +00:00
dame.eth
d94e2a806f Update newsletter-196.md 2023-08-08 17:19:13 -04:00
dame.eth
2026c4c13c Add files via upload 2023-08-08 17:15:49 -04:00
dame.eth
00bdf917ac Update newsletter-196.md 2023-08-08 16:48:23 -04:00
dame.eth
480c5fa8f9 Update newsletter-196.md 2023-08-08 09:46:24 -04:00
dame.eth
42c6639535 Create newsletter-196.md 2023-08-08 06:41:18 -04:00
dame.eth
c3cddd4235 Merge pull request #604 from dennis-tra/2023-08-an-observatory-for-the-ipfs-network
Add probelab.io launch blog post - An Observatory for the IPFS Network
2023-08-03 10:03:07 -04:00
dame.eth
35bd9c577d Update 2023-08-an-observatory-for-the-ipfs-network.md 2023-08-03 09:57:38 -04:00
dame.eth
b3531ca012 Update 2023-08-an-observatory-for-the-ipfs-network.md
Attempting to fix a small formatting error
2023-08-03 09:47:25 -04:00
dame.eth
604f35514e Update 2023-08-an-observatory-for-the-ipfs-network.md
Minor syntax changes
2023-08-03 09:40:12 -04:00
Dennis Trautwein
50e1341252 Add probelab.io header image 2023-08-03 11:31:59 +02:00
Dennis Trautwein
18a1fcb564 Add probelab.io launch blog post 2023-08-03 11:15:29 +02:00
Dennis Trautwein
dfc143258f Update README 2023-08-03 11:15:15 +02:00
dame.eth
793218485d Merge pull request #602 from ipfs/damedoteth-patch-1
Update 2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
2023-07-26 15:40:27 -04:00
dame.eth
6afffc8bb8 Update 2023-07-rust-libp2p-based-ipfs-bootstrap-node.md 2023-07-26 15:30:49 -04:00
dame.eth
3d06cd2209 Merge pull request #601 from mxinden/rust-libp2p-bootstrap
Add rust-libp2p based bootstrap node post
2023-07-25 14:39:27 -04:00
Max Inden
337868c27c Fix "in case" repetition 2023-07-25 15:09:31 +02:00
Max Inden
42e5984afb Use probelab.io 2023-07-25 15:02:33 +02:00
Max Inden
7621239ae8 Remove reference to private github issue 2023-07-25 15:02:31 +02:00
Max Inden
06236ad3db Rephrase connection percentage 2023-07-25 15:02:30 +02:00
Max Inden
999a9fca5b Remove reference to private github issue 2023-07-25 15:02:28 +02:00
Max Inden
057741536e Change heading to IPFS Public DHT Bootstrap Nodes 2023-07-25 15:02:25 +02:00
Max Inden
a99e9286aa Apply suggestions from code review
Co-authored-by: Steve Loeppky <stvn@loeppky.com>
2023-07-25 15:02:11 +02:00
Max Inden
eadb3a08ac Fix enumeration indentation in rendered view 2023-07-24 10:52:26 +02:00
Max Inden
f74eb6d44a Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md 2023-07-24 10:45:05 +02:00
Max Inden
1de72140f0 Apply suggestions from code review
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:44:24 +02:00
Max Inden
e19b149834 Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:36:33 +02:00
Max Inden
c2179714e9 Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:36:26 +02:00
Max Inden
1094adfb65 Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:36:12 +02:00
Max Inden
30994f8fc9 Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:35:52 +02:00
Max Inden
4c9fadfae5 Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:35:30 +02:00
Max Inden
d8849053e1 Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:34:59 +02:00
Max Inden
ce49b67529 Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:34:40 +02:00
Max Inden
bf158924cd Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:34:17 +02:00
Max Inden
d2ac1a6f2f Update src/_blog/2023-07-rust-libp2p-based-ipfs-bootstrap-node.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-07-24 10:33:59 +02:00
Max Inden
a7652dc8d0 Fix typo 2023-07-16 19:00:55 +09:00
Max Inden
856a8216a0 Add graphs 2023-07-16 14:47:54 +09:00
Max Inden
a2179f47cc Restructure in action section 2023-07-16 14:05:40 +09:00
Max Inden
e9beaec67b Expand rust-libp2p-server section 2023-07-16 13:54:52 +09:00
Max Inden
ef3093ed35 Write out motivation section 2023-07-16 13:41:49 +09:00
Max Inden
0f62e33bb4 Short dig output 2023-07-16 13:30:35 +09:00
Max Inden
ec8cda7a76 Expand on DNS resolution 2023-07-14 16:33:36 +09:00
Max Inden
4c674602a6 Update graph and add sub headings 2023-07-14 16:06:02 +09:00
Max Inden
2b3a9da379 Refine motivation 2023-07-14 15:15:49 +09:00
Max Inden
b089e5c132 Fix permalink 2023-07-14 14:22:30 +09:00
Max Inden
8e596fc3ec Add rust-libp2p based bootstrap node post 2023-07-14 12:20:45 +09:00
dame.eth
4b79eb9e0d Merge pull request #600 from ipfs/damedoteth-patch-1
Create newsletter-195.md
2023-07-06 10:14:10 -04:00
dame.eth
1dbe3edee5 Update newsletter-195.md 2023-07-06 10:09:16 -04:00
dame.eth
8caca4cd84 Update newsletter-195.md 2023-07-06 04:23:11 -04:00
dame.eth
100f4e6385 Update newsletter-195.md 2023-07-05 19:22:47 -04:00
dame.eth
64c76857a2 Update newsletter-195.md 2023-07-05 13:45:02 -04:00
dame.eth
65dabefbd8 Update newsletter-195.md 2023-07-05 13:44:19 -04:00
dame.eth
cd2c83f194 Update newsletter-195.md 2023-07-05 12:43:09 -04:00
dame.eth
0d9a5476d8 Create newsletter-195.md 2023-07-05 11:59:17 -04:00
Henrique Dias
d49fae6d43 Update Kubo: v0.21.0 (#599) 2023-07-03 13:19:58 +02:00
dame.eth
640229ca80 Merge pull request #598 from ipfs/damedoteth-patch-1
WIP: Update README.md to remove forestry info, add more tips/info for…
2023-06-28 12:09:37 -04:00
dame.eth
4544108604 Merge branch 'main' into damedoteth-patch-1 2023-06-28 11:45:17 -04:00
dame.eth
4dc23511b6 Update README.md 2023-06-28 11:40:39 -04:00
GitHub
2d1e4f16ae chore: Update .github/workflows/stale.yml [skip ci] 2023-06-28 08:51:03 +00:00
dame.eth
8f1e9178f2 WIP: Update README.md to remove forestry info, add more tips/info for PR publishing 2023-06-26 13:06:12 -04:00
dame.eth
3b6d1d8514 Merge pull request #597 from ipfs/damedoteth-patch-2
Update 2023-ipfs-thing-recap-content-routing.md
2023-06-20 15:19:00 -04:00
dame.eth
356b8a49e4 Merge branch 'main' into damedoteth-patch-2 2023-06-20 14:19:08 -04:00
dame.eth
5fb9793779 Merge pull request #596 from ipfs/damedoteth-patch-1 2023-06-20 14:18:55 -04:00
dame.eth
81b77fb32c Update 2023-ipfs-thing-recap-content-routing.md 2023-06-20 13:36:52 -04:00
dame.eth
bd2cc58a22 Update 2023-thing-web-track.md 2023-06-20 13:34:11 -04:00
GitHub
4f8d2dc6ba chore: Update .github/dependabot.yml [skip ci] 2023-06-19 12:28:27 +00:00
Piotr Galar
0401937ba2 Merge pull request #595 from ipfs/dependabot/github_actions/actions/checkout-3
Bump actions/checkout from 2 to 3
2023-06-15 17:16:34 +02:00
dependabot[bot]
194286f747 Bump actions/checkout from 2 to 3
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-14 13:08:48 +00:00
ipfs-mgmt-read-write[bot]
8525192fc6 chore: reduce dependabot frequency 2023-06-14 13:05:49 +00:00
GitHub
866c338fdf chore: Update .github/workflows/stale.yml [skip ci] 2023-06-14 11:15:22 +00:00
dame.eth
2cea3fdf74 Merge pull request #594 from ipfs/damedoteth-patch-1
Update ipfs-newsletter-194.md
2023-06-06 14:50:54 -04:00
dame.eth
28464dd6a2 Update ipfs-newsletter-194.md 2023-06-06 14:45:36 -04:00
dame.eth
b3793d0166 Update ipfs-newsletter-194.md 2023-06-06 14:45:04 -04:00
dame.eth
0cdca0b936 Merge pull request #593 from ipfs/damedoteth-patch-1
Create ipfs-newsletter-194.md
2023-06-06 14:33:10 -04:00
dame.eth
1c52ece2ce Update ipfs-newsletter-194.md 2023-06-06 13:50:57 -04:00
dame.eth
e50ca9483d Create ipfs-newsletter-194.md 2023-06-06 13:07:41 -04:00
dame.eth
1f43bcd5a8 Merge pull request #592 from ipfs/damedoteth-patch-1
Final changes to multi-client Chromium blog post
2023-06-01 04:47:41 -04:00
dame.eth
e9bfb1280f Update 2023-05-multi-gateway-browser-client.md 2023-06-01 04:34:26 -04:00
github-actions[bot]
0e2ac03c36 Optimised images with calibre/image-actions 2023-06-01 08:33:32 +00:00
dame.eth
b163193b72 Merge branch 'main' into damedoteth-patch-1 2023-06-01 04:29:19 -04:00
dame.eth
e0c35a0760 Merge pull request #560 from John-LittleBearLabs/ipfs-chromium-post
Adding a blog post about multi-gateway IPFS client in Chromium.
2023-06-01 04:24:02 -04:00
dame.eth
c9de0c537b Add files via upload 2023-06-01 04:20:00 -04:00
dame.eth
b5bd24564b Merge branch 'main' into ipfs-chromium-post 2023-06-01 03:28:13 -04:00
dame.eth
63d58a0689 Merge pull request #584 from ipfs/damedoteth-patch-2
Create 2023-http-gateways-recap.md
2023-05-30 16:22:26 -04:00
dame.eth
119de7a23c Update 2023-http-gateways-recap.md 2023-05-30 16:14:44 -04:00
dame.eth
1bd66fc8f1 Merge branch 'main' into damedoteth-patch-2 2023-05-30 16:09:54 -04:00
John Turpish
13b7ce6739 Some changes inspired by Steve Loeppky's PR comments. 2023-05-30 07:43:50 -04:00
John Turpish
b630ff5efd Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Steve Loeppky <stvn@loeppky.com>
2023-05-30 06:40:33 -04:00
John Turpish
02ce414503 Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Steve Loeppky <stvn@loeppky.com>
2023-05-30 06:39:31 -04:00
John Turpish
2477f8e6e5 Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Steve Loeppky <stvn@loeppky.com>
2023-05-30 06:39:19 -04:00
John Turpish
4904d52244 Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Steve Loeppky <stvn@loeppky.com>
2023-05-30 06:39:02 -04:00
Steve Loeppky
037a350afb Merge pull request #586 from ipfs/fix/add-js-ipfs-deprecation-image
Added custom header to js-ipfs deprecation
2023-05-29 16:19:33 +02:00
github-actions[bot]
62fcfe9726 Optimised images with calibre/image-actions 2023-05-26 23:11:45 +00:00
Steve Loeppky
3d9fb38e39 Added custom header to js-ipfs deprecation
This a followup for https://github.com/ipfs/ipfs-blog/pull/585
2023-05-26 16:05:16 -07:00
dame.eth
6c98053d92 Merge branch 'main' into damedoteth-patch-2 2023-05-26 11:13:37 -04:00
dame.eth
dd7a96da44 Update 2023-http-gateways-recap.md 2023-05-26 10:46:46 -04:00
Steve Loeppky
20b2d4852b js-ipfs deprecation - replaced by Helia (#585)
This is a blog post accompanying the js-ipfs deprecation work
that is active at https://github.com/ipfs/js-ipfs/issues/4336
2023-05-26 12:35:42 +01:00
dame.eth
b4e96c779a Update 2023-http-gateways-recap.md 2023-05-24 12:40:04 -04:00
dame.eth
6a54ab4dbf Update 2023-http-gateways-recap.md 2023-05-24 12:22:40 -04:00
dame.eth
9535040f57 Add files via upload 2023-05-24 12:21:13 -04:00
dame.eth
62001948f2 Update 2023-05-multi-gateway-browser-client.md 2023-05-24 11:55:48 -04:00
John Turpish
cb50299b22 Out-of-band suggestion: link to releases/ not one particular release, so this blog can be more evergreen. 2023-05-24 11:22:10 -04:00
dame.eth
ca9b8339e8 Update 2023-http-gateways-recap.md 2023-05-24 10:27:59 -04:00
dame.eth
81f39b6a7a Merge branch 'main' into damedoteth-patch-2 2023-05-24 10:21:17 -04:00
dame.eth
95d33983ed Merge pull request #583 from ipfs/damedoteth-patch-1
Create 2023-ipfs-thing-community-governance.md
2023-05-24 10:19:42 -04:00
dame.eth
947d7d6a4f Create 2023-http-gateways-recap.md 2023-05-24 10:18:42 -04:00
dame.eth
b3b09c66e9 Update 2023-ipfs-thing-community-governance.md 2023-05-24 10:00:14 -04:00
dame.eth
0274612e8b Update 2023-ipfs-thing-community-governance.md 2023-05-24 09:46:59 -04:00
Chris Waring
37ddb7a324 add post 2023-05-23 22:57:03 +01:00
Chris Waring
4bf899b897 mv post 2023-05-23 22:55:48 +01:00
dame.eth
d264aa7187 Update 2023-ipfs-thing-community-governance.md 2023-05-23 17:36:30 -04:00
dame.eth
46773b3e74 Add files via upload 2023-05-23 17:36:02 -04:00
dame.eth
47729529c9 Update 2023-ipfs-thing-community-governance.md 2023-05-23 17:14:17 -04:00
dame.eth
da9d5c9494 Update 2023-ipfs-thing-community-governance.md 2023-05-23 17:04:59 -04:00
dame.eth
b16b8fcd02 Update 2023-ipfs-thing-community-governance.md 2023-05-23 16:44:26 -04:00
dame.eth
802801a2d6 Update 2023-ipfs-thing-community-governance.md 2023-05-23 15:45:57 -04:00
dame.eth
7dd6094ccf Merge branch 'main' into damedoteth-patch-1 2023-05-23 15:37:28 -04:00
dame.eth
beeb1b17d8 Merge pull request #534 from meandavejustice/edit-announcing-pin-tweet-to-ipfs-blogpost
Edit: Add webrecorder tool link to announcing pin tweet blogpost
2023-05-23 09:26:58 -04:00
dame.eth
01291ec23f Merge branch 'main' into edit-announcing-pin-tweet-to-ipfs-blogpost 2023-05-23 09:15:33 -04:00
dame.eth
ffd7c8bbe9 Update 2023-01-10-announcing-pin-tweet-to-ipfs.md 2023-05-23 09:15:25 -04:00
dame.eth
ddf9c23a1a Create 2023-ipfs-thing-community-governance.md 2023-05-23 08:53:09 -04:00
John Turpish
e5d65455e1 With the latest code PR we do actually enforce a limit. 2023-05-22 14:24:10 -04:00
John Turpish
854807950f Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-22 14:22:46 -04:00
John Turpish
3c6ba66c7e Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-22 14:21:38 -04:00
John Turpish
078dcc88a9 Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-22 14:21:03 -04:00
Marcin Rataj
25a0ff1a16 Apply suggestions from code review
These should be not controversial
2023-05-22 20:20:26 +02:00
dame.eth
5273a1283c Merge branch 'main' into ipfs-chromium-post 2023-05-22 14:15:05 -04:00
dame.eth
540464fe5e Merge pull request #582 from ipfs/damedoteth-patch-1
Update 2023-how-to-hose-dynamic-content-on-ipfs.md
2023-05-17 17:20:29 -04:00
github-actions[bot]
14a55b34c5 Optimised images with calibre/image-actions 2023-05-17 20:39:14 +00:00
dame.eth
6084ecc92e Update 2023-how-to-hose-dynamic-content-on-ipfs.md 2023-05-17 16:34:41 -04:00
dame.eth
dab5f074c9 Add files via upload 2023-05-17 16:33:07 -04:00
dame.eth
257637b5e2 Update 2023-how-to-hose-dynamic-content-on-ipfs.md 2023-05-17 16:29:29 -04:00
github-actions[bot]
eec170fe55 Optimised images with calibre/image-actions 2023-05-17 20:27:42 +00:00
dame.eth
825d79e428 Add files via upload 2023-05-17 16:22:46 -04:00
dame.eth
7d0e2638d8 Update 2023-how-to-hose-dynamic-content-on-ipfs.md 2023-05-17 16:21:46 -04:00
dame.eth
c651d7b8b5 Merge pull request #580 from ipfs/damedoteth-patch-1
Create 2023-hosting-dynamic-content-on-ipfs.md
2023-05-17 15:54:39 -04:00
dame.eth
6b892382a0 Update 2023-how-to-hose-dynamic-content-on-ipfs.md 2023-05-17 15:23:01 -04:00
dame.eth
3ef573d7e8 Merge branch 'main' into damedoteth-patch-1 2023-05-17 15:19:14 -04:00
dame.eth
075fa0d7a9 Update 2023-how-to-hose-dynamic-content-on-ipfs.md 2023-05-17 15:14:26 -04:00
github-actions[bot]
1b831e7918 Optimised images with calibre/image-actions 2023-05-17 18:51:43 +00:00
dame.eth
483779f914 Add files via upload 2023-05-17 14:47:29 -04:00
dame.eth
3b1addab47 Merge pull request #581 from ipfs/damedoteth-patch-2
Create ecosystemcontent.md
2023-05-16 15:34:43 -04:00
dame.eth
9e04fe0d1b Update ecosystemcontent.md 2023-05-16 14:09:23 -04:00
dame.eth
cbf821fbb4 Update Card.vue 2023-05-16 14:09:12 -04:00
dame.eth
19df0682b0 Update ecosystemcontent.md 2023-05-16 14:03:02 -04:00
dame.eth
08814f9d6e Update Card.vue 2023-05-16 13:52:58 -04:00
dame.eth
b3e7677238 Create ecosystemcontent.md 2023-05-16 13:41:56 -04:00
dame.eth
17d7ce5983 Update and rename 2023-hosting-dynamic-content-on-ipfs.md to 2023-how-to-hose-dynamic-content-on-ipfs.md 2023-05-16 13:31:47 -04:00
dame.eth
ed6a118c12 Update 2023-hosting-dynamic-content-on-ipfs.md 2023-05-16 13:26:43 -04:00
dame.eth
21554bef52 Create 2023-hosting-dynamic-content-on-ipfs.md 2023-05-16 12:21:28 -04:00
dame.eth
13137fca11 Merge pull request #568 from masih/masih/thing_2023_recap_content_routing
IPFS Thing 2023 recap of Content Routing track
2023-05-15 15:23:47 -04:00
dame.eth
595a708bec Update 2023-ipfs-thing-recap-content-routing.md 2023-05-15 15:14:37 -04:00
dame.eth
198cbe5688 Update 2023-ipfs-thing-recap-content-routing.md 2023-05-15 14:40:54 -04:00
dame.eth
3122652c62 Update 2023-ipfs-thing-recap-content-routing.md 2023-05-15 14:37:09 -04:00
dame.eth
01e5e9f7f2 Update 2023-ipfs-thing-recap-content-routing.md 2023-05-15 14:29:18 -04:00
dame.eth
18c7761d45 Update 2023-ipfs-thing-recap-content-routing.md 2023-05-15 14:20:35 -04:00
dame.eth
ea5cec846b Merge branch 'main' into masih/thing_2023_recap_content_routing 2023-05-15 11:23:11 -04:00
John Turpish
c3ae019e94 demo now includes some devtool stuff 2023-05-15 10:57:56 -04:00
John Turpish
aac58e361f multibase note 2023-05-15 10:24:50 -04:00
dame.eth
11af0ee637 Merge pull request #510 from meandavejustice/feat/add-durin-announcement 2023-05-11 10:39:55 -04:00
John Turpish
c4c77ad3f5 Fixed YT embed URL 2023-05-11 00:34:50 -04:00
John Turpish
ae98c20f39 Merge branch 'main' into ipfs-chromium-post 2023-05-11 00:06:08 -04:00
John Turpish
a08026b85d Switch to iframe 2023-05-11 00:02:58 -04:00
John Turpish
21009fe453 Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-10 23:46:09 -04:00
John Turpish
0d6ab56b55 Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-10 23:39:51 -04:00
John Turpish
bbf21d37e5 Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-10 23:39:00 -04:00
John Turpish
c3974ec39e Update src/_blog/2023-05-multi-gateway-browser-client.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-10 23:38:44 -04:00
dame.eth
d8e1d9f61d Merge branch 'main' into feat/add-durin-announcement 2023-05-10 21:08:05 -04:00
dame.eth
72d13bf888 Update src/_blog/2023-01-26-announcing-durin.md
Co-authored-by: Marcin Rataj <lidel@lidel.org>
2023-05-10 21:07:53 -04:00
Marcin Rataj
be9a9d494f chore: apply cosmetic suggestions from code review
applying these should make review of remaining ones easier
2023-05-10 21:57:00 +02:00
dame.eth
2effc6f205 Merge pull request #577 from ipfs/autonome-patch-1 2023-05-10 15:14:38 -04:00
Dietrich Ayala
996f21f073 Update 2023-thing-web-track.md
fix Peergos video id
2023-05-10 14:59:58 -04:00
dame.eth
fb32c56d64 Update 2023-01-26-announcing-durin.md 2023-05-10 12:04:14 -04:00
dame.eth
f3c1baa214 Update 2023-01-26-announcing-durin.md 2023-05-10 11:57:52 -04:00
dame.eth
270cd440c5 Update 2023-01-26-announcing-durin.md 2023-05-10 11:49:43 -04:00
dame.eth
f3a424908d Update 2023-01-26-announcing-durin.md 2023-05-10 11:28:58 -04:00
dame.eth
62083912bc Merge branch 'main' into feat/add-durin-announcement 2023-05-10 11:28:30 -04:00
dame.eth
a9204f9d26 Update 2023-01-26-announcing-durin.md 2023-05-10 11:28:21 -04:00
dame.eth
8956e414a7 Merge pull request #567 from autonome/webtrack
add ipfs thing web track recap post
2023-05-10 08:14:37 -04:00
dame.eth
d0bdc365f7 Merge branch 'main' into webtrack 2023-05-10 08:11:05 -04:00
dame.eth
7007e5c0aa Update 2023-thing-web-track.md 2023-05-10 08:10:48 -04:00
dame.eth
5b570d1633 Merge pull request #575 from ipfs/newsletter-193-update
Update welcome-to-ipfs-news-193.md
2023-05-09 15:14:08 -04:00
dame.eth
fc0de4cb9a Update welcome-to-ipfs-news-193.md 2023-05-09 15:08:06 -04:00
dame.eth
647ceb76f6 Merge pull request #574 from ipfs/damedoteth-patch-1
Newsletter 193
2023-05-09 13:33:22 -04:00
dame.eth
3104138446 Newsletter 193 2023-05-09 09:41:53 -04:00
cw
2390e976b3 Merge branch 'main' into webtrack 2023-05-09 13:59:01 +01:00
Henrique Dias
0b410c28de Merge pull request #573 from ipfs/kubo-v0.20.0
Update Kubo: v0.20.0
2023-05-09 14:56:34 +02:00
Henrique Dias
c460a0805f chore: add Kubo release note 2023-05-09 12:50:54 +00:00
dame.eth
fd75c5d5b7 Merge branch 'main' into webtrack 2023-05-08 17:22:17 -04:00
dame.eth
a130738e19 Merge pull request #570 from ipfs/update-date
Update 2023-05-ipfs-unresponsive-nodes-incident.md
2023-05-08 17:22:09 -04:00
dame.eth
718d19cea3 Merge branch 'main' into update-date 2023-05-08 16:57:25 -04:00
dame.eth
a7b8b72c64 Merge branch 'main' into webtrack 2023-05-08 16:57:18 -04:00
dame.eth
f71201d66b Update 2023-thing-web-track.md 2023-05-08 16:57:11 -04:00
dame.eth
9f6a643b90 Merge pull request #572 from ipfs/damedoteth-patch-1
Add files via upload
2023-05-08 16:56:25 -04:00
dame.eth
38a5af69cc Merge branch 'main' into update-date 2023-05-08 16:53:49 -04:00
dame.eth
2bfee0bab9 Add files via upload 2023-05-08 16:51:52 -04:00
dame.eth
5ef3f3a099 Updated header image format 2023-05-08 16:45:07 -04:00
dame.eth
d07b57e7b2 Merge branch 'main' into masih/thing_2023_recap_content_routing 2023-05-08 16:43:24 -04:00
dame.eth
a99e7c121d Reformatted header image 2023-05-08 16:42:22 -04:00
dame.eth
92ccd3a68a Merge pull request #571 from ipfs/damedoteth-patch-1
Add image for thing track recap post
2023-05-08 16:41:36 -04:00
dame.eth
2728479b29 Add image for thing track recap post 2023-05-08 16:35:25 -04:00
dame.eth
f291efa5e1 Update 2023-ipfs-thing-recap-content-routing.md 2023-05-08 16:34:58 -04:00
dame.eth
1ff0b389b8 Reformatting YouTube embeds 2023-05-08 16:27:39 -04:00
dame.eth
808d1d7013 Merge branch 'main' into update-date 2023-05-08 16:20:03 -04:00
dame.eth
42d8b59624 Update 2023-05-ipfs-unresponsive-nodes-incident.md 2023-05-08 16:19:47 -04:00
dame.eth
d479a1d232 Merge pull request #565 from ipfs/ipfs-unresponsive-nodes-incident
Unresponsive nodes incident blogpost
2023-05-08 16:18:17 -04:00
dame.eth
ae5733f3fb Reformatted YouTube links for proper rendering 2023-05-08 16:15:06 -04:00
dame.eth
bdc4b13610 Merge branch 'main' into webtrack 2023-05-08 16:03:28 -04:00
dame.eth
059d2e5948 Merge branch 'main' into ipfs-unresponsive-nodes-incident 2023-05-08 16:02:13 -04:00
dame.eth
56100f3bfb Merge branch 'main' into masih/thing_2023_recap_content_routing 2023-05-08 16:02:03 -04:00
dame.eth
7616a301cd Merge pull request #569 from ipfs/news-coverage-update-2023
Update newscoverage.md
2023-05-08 16:01:27 -04:00
dame.eth
dc34871f80 Fourth try... 2023-05-08 14:31:15 -04:00
dame.eth
d816906631 Third attempt... 2023-05-08 14:15:35 -04:00
dame.eth
4a9cc61dde Second attempt at fixing spacing between indented paragraphs 2023-05-08 13:53:51 -04:00
dame.eth
7ba03e948f Attempting to fix lack of spacing between some indented paragraphs 2023-05-08 13:30:18 -04:00
Masih H. Derkani
42febcd147 IPFS Thing 2023 recap of Content Routing track
Write a recap of Content Routing track at IPFS Thing 2023 in form of a
blog post with links to relevant talks and resources.
2023-05-08 17:54:05 +01:00
dame.eth
a339d91cf4 Update newscoverage.md
Adding brave announcement
2023-05-08 12:06:30 -04:00
Yiannis Psaras
760625c442 minor edits 2023-05-08 19:02:18 +03:00
dame.eth
d2f5787b07 Update 2023-01-26-announcing-durin.md
Changed the header image and tweaked the blog title
2023-05-08 11:59:41 -04:00
John Turpish
16c3dba5ea Finish header rename 2023-05-08 11:23:45 -04:00
John Turpish
fb7e4d768e Forgot to remove the header from hackmd 2023-05-08 11:13:05 -04:00
John Turpish
38ac3eb861 Updates based upon https://hackmd.io/6tx3_OJdQ1Wtn9w4jCG2ag 2023-05-08 10:58:10 -04:00
dame.eth
7756cb2d24 Update 2023-05-ipfs-unresponsive-nodes-incident.md
Made some relatively minor changes for syntax, flow, and clarity. Also tweaked the title of the post due to length (it was overflowing in the preview), and adjusted some of the headers.
2023-05-08 10:39:07 -04:00
John Turpish
c06f1b5566 Adding a blog post about multi-gateway IPFS client in Chromium. 2023-05-08 10:29:31 -04:00
Steve Loeppky
f6989a2f77 Update 2023-05-ipfs-unresponsive-nodes-incident.md
Small typoe changes and moved emojis to beginning of headings.
2023-05-07 21:22:00 -07:00
Yiannis Psaras
b154073461 adding ipfs thing video 2023-05-07 23:39:26 +03:00
Yiannis Psaras
b4b9f2de51 fixing margins and formatting 2023-05-07 21:49:28 +03:00
Yiannis Psaras
ff280f8649 fixing img paths 2023-05-07 21:27:09 +03:00
Yiannis Psaras
55ed6ccb1e Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:16:10 +03:00
Yiannis Psaras
82479d1bc3 Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:15:57 +03:00
Yiannis Psaras
77123ed881 Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:15:42 +03:00
Yiannis Psaras
67bee86e2c Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:14:55 +03:00
Yiannis Psaras
4d605994a0 Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:14:44 +03:00
Yiannis Psaras
a59bd61a86 Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:14:30 +03:00
Yiannis Psaras
099b6e1a44 Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:13:25 +03:00
Yiannis Psaras
128e9bce31 Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md
Co-authored-by: Steve Loeppky <biglep@protocol.ai>
2023-05-07 21:13:08 +03:00
dietrich ayala
07052b247b add ipfs ting web track recap post 2023-05-06 18:19:16 +02:00
Steve Loeppky
b2874c0b81 Merge branch 'main' into ipfs-unresponsive-nodes-incident 2023-05-06 07:04:42 +02:00
dame.eth
3a9e12c368 Merge branch 'main' into feat/add-durin-announcement 2023-05-05 16:22:46 -04:00
dame.eth
05c720ed3d Merge pull request #566 from ipfs/damedoteth-patch-1
Add image for Durin blog post
2023-05-05 16:22:14 -04:00
github-actions[bot]
2ab38512b7 Optimised images with calibre/image-actions 2023-05-05 20:12:09 +00:00
cw
2190fb0520 Update src/_blog/2023-05-ipfs-unresponsive-nodes-incident.md 2023-05-05 21:09:32 +01:00
dame.eth
e76dfef294 Add image for Durin blog post 2023-05-05 16:07:14 -04:00
Yiannis Psaras
386d2fe6f2 fixing relative paths 2023-05-05 22:48:11 +03:00
Yiannis Psaras
978741cb56 adding back author 2023-05-05 22:40:11 +03:00
Yiannis Psaras
d9fd598eaf fixing file names again 2023-05-05 22:29:16 +03:00
Yiannis Psaras
075485f490 Merge branch 'ipfs-unresponsive-nodes-incident' of https://github.com/ipfs/ipfs-blog into ipfs-unresponsive-nodes-incident 2023-05-05 21:07:44 +03:00
Yiannis Psaras
be566f2e73 updating image paths 2023-05-05 21:07:41 +03:00
Yiannis Psaras
03f92e8593 changing folder name to match post name 2023-05-05 21:04:50 +03:00
github-actions[bot]
3e0305d595 Optimised images with calibre/image-actions 2023-05-05 18:04:05 +00:00
Yiannis Psaras
6441b3cc01 changing header image title 2023-05-05 20:59:11 +03:00
Yiannis Psaras
bcfbdafa30 changing assets folder name to match branch name 2023-05-05 20:50:59 +03:00
Yiannis Psaras
1b9b03c1ae changing permalink to match the branch name 2023-05-05 20:47:37 +03:00
Yiannis Psaras
ebd492b8dd excluding header image from body 2023-05-05 20:42:26 +03:00
Yiannis Psaras
727e1b5aa9 Update unresponsive-nodes-incident-202305.md 2023-05-05 20:37:35 +03:00
Yiannis Psaras
9bd7ac0034 excluding author 2023-05-05 20:34:37 +03:00
Yiannis Psaras
dc3fd935ff fixing header typo 2023-05-05 20:28:57 +03:00
Yiannis Psaras
a080cdd50d adding unresponsive nodes incident text 2023-05-05 20:13:53 +03:00
dame.eth
e4e2e7fa73 Update 2023-01-26-announcing-durin.md
Changing the iOS app store link
2023-05-05 09:02:47 -04:00
dame.eth
9ae7a5bfa2 Merge branch 'main' into feat/add-durin-announcement 2023-05-05 08:51:47 -04:00
David Justice
2a811f4720 Final copy updates 2023-05-04 16:46:26 -04:00
dame.eth
4c32825969 Merge pull request #564 from ipfs/momack2-patch-1 2023-05-04 12:57:31 -04:00
MollyM
f87add34ed Update 2023-ipfs-thing-recap.md
small spelling / formatting nits
2023-05-04 09:43:09 -07:00
dame.eth
5c41ea3123 Merge pull request #563 from ipfs/damedoteth-patch-1
Create 2023-ipfs-thing-recap.md
2023-05-04 08:09:53 -04:00
dame.eth
90d0530c76 Update 2023-ipfs-thing-recap.md 2023-05-04 08:01:21 -04:00
dame.eth
b3c87ecb9b Update 2023-ipfs-thing-recap.md 2023-05-04 07:58:15 -04:00
dame.eth
917f2f381f Update 2023-ipfs-thing-recap.md 2023-05-04 07:47:11 -04:00
dame.eth
a07798067b Update 2023-ipfs-thing-recap.md 2023-05-04 07:42:12 -04:00
dame.eth
f3bb2de2d8 Update 2023-ipfs-thing-recap.md 2023-05-03 14:18:53 -04:00
dame.eth
c9629616d7 Update 2023-ipfs-thing-recap.md 2023-05-03 14:12:51 -04:00
David Justice
36861672ea optimize gif 2023-05-03 14:10:54 -04:00
David Justice
d0102204e9 add padding around images 2023-05-03 14:08:30 -04:00
David Justice
25c15fd23a remove width attribute from images 2023-05-03 14:05:46 -04:00
dame.eth
35ccedec98 Update 2023-ipfs-thing-recap.md 2023-05-03 14:04:53 -04:00
dame.eth
9a73e8b61c Add files via upload 2023-05-03 14:04:17 -04:00
David Justice
762e7f09e3 add ipfs thing talk and widen images 2023-05-03 13:57:32 -04:00
dame.eth
ad547a13fe Add files via upload 2023-05-03 13:46:22 -04:00
dame.eth
c75c8d31e9 Update 2023-ipfs-thing-recap.md 2023-05-03 13:38:50 -04:00
dame.eth
c42f6fd09c Add files via upload 2023-05-03 13:38:36 -04:00
dame.eth
988f07f7cc Delete featured image.jpg 2023-05-03 13:37:56 -04:00
dame.eth
516e6fb841 Add files via upload 2023-05-03 13:36:42 -04:00
dame.eth
22bbb008a2 Delete danny-juan-collage.jpg 2023-05-03 13:36:15 -04:00
dame.eth
4f0821b376 Add files via upload 2023-05-03 13:34:25 -04:00
dame.eth
c23758faa7 Delete danny-juan-collage.jpg 2023-05-03 13:33:23 -04:00
dame.eth
4c52d9d0bf Add files via upload 2023-05-03 13:32:31 -04:00
dame.eth
d2c84a6448 Delete danny-juan.jpg 2023-05-03 13:32:16 -04:00
dame.eth
b4b13d2320 Add files via upload 2023-05-03 13:30:15 -04:00
dame.eth
82c9bcc0a3 Add files via upload 2023-05-03 13:27:15 -04:00
dame.eth
b32c3311a7 Add images for blog post 2023-05-03 13:22:33 -04:00
dame.eth
b9a3f88af2 Create placeholder.md 2023-05-03 13:19:18 -04:00
David Justice
506bab5bc2 Further edits to Durin blog post 2023-05-03 12:41:20 -04:00
David Justice
91e93ff617 task: optimize gif for blog release 2023-05-03 12:41:19 -04:00
David Justice
08a94e4e3f Update gateway image and set width on blog images 2023-05-03 12:41:19 -04:00
David Justice
e343c554f5 Add blog: Announcing Durin 2023-05-03 12:41:10 -04:00
dame.eth
5b68372330 Create 2023-ipfs-thing-recap.md
Need to add images still
2023-05-03 09:21:49 -04:00
Henrique Dias
e76e614f99 Merge pull request #562 from ipfs/kubo-v0.19.2
Update Kubo: v0.19.2
2023-05-03 12:16:33 +02:00
Henrique Dias
f6be10e522 chore: add Kubo release note 2023-05-03 10:11:51 +00:00
dame.eth
40276399cb Merge pull request #561 from cewood/content-blocking-ipfs-stack
Add 2023-content-blocking-for-the-ipfs-stack.md
2023-04-26 15:20:02 -04:00
dame.eth
6d61839db6 Update 2023-content-blocking-for-the-ipfs-stack.md 2023-04-26 14:21:09 -04:00
Mosh
9bc5d2638e Update 2023-content-blocking-for-the-ipfs-stack.md 2023-04-26 13:58:49 -04:00
dame.eth
ada48ed07e Update 2023-content-blocking-for-the-ipfs-stack.md
moved date up to today
2023-04-26 13:57:57 -04:00
dame.eth
f0e5608a34 Update 2023-content-blocking-for-the-ipfs-stack.md 2023-04-26 10:14:43 -04:00
Cameron Wood
3ceec6233f Suggested changes 2023-04-25 16:22:50 +02:00
Cameron Wood
869d0eeed3 Add 2023-content-blocking-for-the-ipfs-stack.md 2023-04-24 14:21:17 +02:00
dame.eth
fc8e133368 Merge pull request #559 from ipfs/damedoteth-patch-1
Update 2023-ipfs-on-bluesky.md
2023-04-17 09:26:21 -04:00
dame.eth
2b3432b5f4 Update 2023-ipfs-on-bluesky.md 2023-04-17 09:18:20 -04:00
dame.eth
8ade16e914 Merge pull request #558 from ipfs/damedoteth-patch-2
Create 2023-ipfs-on-bluesky.md
2023-04-17 09:17:48 -04:00
github-actions[bot]
ad2ca6bee9 Optimised images with calibre/image-actions 2023-04-13 13:38:45 +00:00
dame.eth
11eed64fbd Merge branch 'main' into damedoteth-patch-2 2023-04-13 09:34:36 -04:00
dame.eth
65d28941a5 Update 2023-ipfs-on-bluesky.md 2023-04-13 09:34:07 -04:00
dame.eth
8f8328c313 Merge pull request #556 from ipfs/damedoteth-patch-1
Update 2023-introducing-lassie.md
2023-04-13 09:19:19 -04:00
dame.eth
05b82bb9a6 Update 2023-ipfs-on-bluesky.md 2023-04-12 15:19:25 -04:00
dame.eth
2e811ca386 Add files via upload 2023-04-12 15:18:36 -04:00
dame.eth
dc64ebbb94 Create 2023-ipfs-on-bluesky.md 2023-04-12 15:12:25 -04:00
dame.eth
daf4841824 Update 2023-introducing-lassie.md 2023-04-06 14:07:21 -04:00
dame.eth
6d9840100e Merge pull request #555 from ipfs/damedoteth-patch-1
Create 2023-introducing-lassie.md
2023-04-06 14:05:44 -04:00
dame.eth
17c48aa2ff Update 2023-introducing-lassie.md 2023-04-06 14:01:07 -04:00
github-actions[bot]
9c1a2c9dee Optimised images with calibre/image-actions 2023-04-06 13:58:52 +00:00
dame.eth
e3bcc1ccbf Update 2023-introducing-lassie.md 2023-04-06 09:54:17 -04:00
dame.eth
cefb788dd5 Add files via upload
For Lassie blog post
2023-04-06 09:53:46 -04:00
dame.eth
a19b9f01de Update 2023-introducing-lassie.md 2023-04-06 09:47:17 -04:00
dame.eth
96bd34bdb8 Create 2023-introducing-lassie.md 2023-04-06 09:41:01 -04:00
dame.eth
31ab11fa8c Merge pull request #552 from ipfs/fix-twitter-link
fix link to twitter
2023-04-06 09:18:10 -04:00
dame.eth
c83d396433 Merge branch 'main' into fix-twitter-link 2023-04-06 08:56:31 -04:00
Piotr Galar
51ca3fd187 Merge pull request #554 from ipfs/kubo-v0.19.1
Update Kubo: v0.19.1
2023-04-05 22:24:04 +02:00
galargh
3c8a59b809 chore: add Kubo release note 2023-04-05 20:18:42 +00:00
dame.eth
9b4d4a789e Merge branch 'main' into fix-twitter-link 2023-04-05 13:51:22 -04:00
Marcin Rataj
1e998df69a Merge pull request #550 from darobin/principles
Blog post on implementations and principles
2023-03-31 21:17:38 +02:00
Robin Berjon
5a3ac54025 some link fixes 2023-03-31 14:58:33 -04:00
Marcin Rataj
ff5c338379 chore: update links, add bifrost-gateway 2023-03-31 20:35:07 +02:00
Daniel Norman
f8b355c8ce Update 2023-3-29-ipfs-thing-content-tracks.md 2023-03-31 17:24:58 +02:00
Robin Berjon
c94dbdaafe set date back 2023-03-30 13:31:52 -04:00
Robin Berjon
470902d979 I like to do it in style 2023-03-30 13:31:27 -04:00
Robin Berjon
5a659016da life is more fun with non-standard MD breaks 2023-03-30 12:58:44 -04:00
Robin Berjon
04bf79b52a add a toc 2023-03-30 11:42:25 -04:00
Robin Berjon
b9cd27506d Update src/_blog/2023-03-implementations-principles.md
Co-authored-by: Steve Loeppky <stvn@loeppky.com>
2023-03-30 11:34:40 -04:00
Robin Berjon
7973d23dab Update src/_blog/2023-03-implementations-principles.md
Co-authored-by: Steve Loeppky <stvn@loeppky.com>
2023-03-30 11:34:19 -04:00
Robin Berjon
b2b0baeff1 quote the things 2023-03-29 15:07:41 -04:00
Robin Berjon
a8d50fee21 blog post on implementations and principles 2023-03-29 15:03:12 -04:00
Daniel Norman
1ba02094fe Fixes #540 (#549)
Co-authored-by: dame.eth <110121581+damedoteth@users.noreply.github.com>
2023-03-29 19:09:18 +02:00
dame.eth
57aa8c30f0 Merge pull request #547 from ipfs/damedoteth-patch-1
Create 2023-3-29-ipfs-thing-content-tracks
2023-03-29 12:55:53 -04:00
dame.eth
9d65e95ed9 Update 2023-3-29-ipfs-thing-content-tracks.md 2023-03-29 12:40:22 -04:00
dame.eth
ed9576ce04 Update src/_blog/2023-3-29-ipfs-thing-content-tracks.md
Co-authored-by: Mosh <1306020+mishmosh@users.noreply.github.com>
2023-03-29 12:27:50 -04:00
dame.eth
d383ea1d5d Update src/_blog/2023-3-29-ipfs-thing-content-tracks.md
Co-authored-by: Mosh <1306020+mishmosh@users.noreply.github.com>
2023-03-29 12:27:22 -04:00
dame.eth
9707b97516 Update 2023-3-29-ipfs-thing-content-tracks.md 2023-03-29 09:54:51 -04:00
dame.eth
c9904fb9a9 Update 2023-3-29-ipfs-thing-content-tracks.md 2023-03-28 16:21:31 -04:00
dame.eth
4b823852b8 Update 2023-3-29-ipfs-thing-content-tracks.md 2023-03-28 16:11:18 -04:00
dame.eth
3fc9204749 Rename 2023-3-29-ipfs-thing-content-tracks to 2023-3-29-ipfs-thing-content-tracks.md 2023-03-28 15:53:18 -04:00
dame.eth
0c30f57edd Update 2023-3-29-ipfs-thing-content-tracks 2023-03-28 15:48:06 -04:00
dame.eth
4048891c68 Update 2023-3-29-ipfs-thing-content-tracks 2023-03-28 15:12:05 -04:00
dame.eth
8264ff8273 Add files via upload 2023-03-28 15:10:38 -04:00
dame.eth
97a985f4e5 Delete ipfs-things-23-og.jpg 2023-03-28 15:09:40 -04:00
dame.eth
236b14557a Add files via upload 2023-03-28 15:06:10 -04:00
dame.eth
94db0def6f Create 2023-3-29-ipfs-thing-content-tracks 2023-03-28 15:01:47 -04:00
dame.eth
fd545ba73a Update welcome-to-ipfs-news-192.md (#545) 2023-03-27 17:05:43 -04:00
dame.eth
75f05e8721 Newsletter 192 (#543)
* Update from Forestry.io
dame.eth created src/_blog/welcome-to-ipfs-news-192.md

* Update from Forestry.io
dame.eth updated src/_blog/welcome-to-ipfs-news-192.md

* Update from Forestry.io
dame.eth updated src/_blog/welcome-to-ipfs-news-192.md

* Update from Forestry.io
dame.eth updated src/_blog/welcome-to-ipfs-news-192.md

* Update from Forestry.io
dame.eth updated src/_blog/welcome-to-ipfs-news-192.md

* Update from Forestry.io
dame.eth updated src/_blog/welcome-to-ipfs-news-192.md
2023-03-27 15:56:28 -04:00
David Justice
3bc0f0e6f0 Edit: Add webrecorder tool link to announcing pin tweet blogpost 2023-02-01 15:59:47 -05:00
111 changed files with 2744 additions and 96 deletions


@@ -42,8 +42,8 @@ fields:
name: permalink
label: Permalink
description: 'URL for this post. Must start and end with slashes. <br>For blog posts,
-include the date: <em>/2022-09-23-descriptive-title/</em><br>For weekly newsletters,
-use the edition number: <em>/weekly-123/</em>'
+include the date: <em>/2022-09-23-descriptive-title/</em><br>For newsletters,
+use the edition number: <em>/newsletter-123/</em>'
config:
required: true
- type: text
@@ -154,6 +154,7 @@ pages:
- src/_blog/2021-05-31-distributed-wikipedia-mirror-update.md
- src/_blog/2022-12-07-testground-in-2022.md
- src/_blog/2023-01-10-announcing-pin-tweet-to-ipfs.md
- src/_blog/2023-01-26-announcing-durin.md
- src/_blog/3s-studio-bringing-unreal-engine-to-ipfs.md
- src/_blog/a-brave-new-wallet-the-future-of-the-browser-wallet.md
- src/_blog/a-guide-to-ipfs-connectivity-in-web-browsers.md
@@ -257,6 +258,7 @@ pages:
- src/_blog/welcome-to-ipfs-news-189.md
- src/_blog/welcome-to-ipfs-news-190.md
- src/_blog/welcome-to-ipfs-news-191.md
- src/_blog/welcome-to-ipfs-news-192.md
- src/_blog/welcome-to-ipfs-weekly-119.md
- src/_blog/welcome-to-ipfs-weekly-120.md
- src/_blog/welcome-to-ipfs-weekly-121.md


@@ -14,4 +14,8 @@ updates:
assignees:
- 'zebateira'
labels:
- 'dependencies'
- 'dependencies'
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"


@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Repo
-uses: actions/checkout@v2
+uses: actions/checkout@v4
- name: Compress Images
uses: calibreapp/image-actions@main


@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
-uses: actions/checkout@v2
+uses: actions/checkout@v4
- name: Install dependencies
run: npm ci
- name: Check for scheduled posts


@@ -2,25 +2,12 @@ name: Close and mark stale issue
on:
schedule:
- cron: '0 0 * * *'
- cron: '0 0 * * *'
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.'
close-issue-message: 'This issue was closed because it is missing author input.'
stale-issue-label: 'kind/stale'
any-of-labels: 'need/author-input'
exempt-issue-labels: 'need/triage,need/community-input,need/maintainer-input,need/maintainers-input,need/analysis,status/blocked,status/in-progress,status/ready,status/deferred,status/inactive'
days-before-issue-stale: 6
days-before-issue-close: 7
enable-statistics: true
uses: pl-strflt/.github/.github/workflows/reusable-stale-issue.yml@v0.3


@@ -27,7 +27,7 @@ jobs:
steps:
- name: Checkout repo
-uses: actions/checkout@v2
+uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Sync target branch


@@ -12,46 +12,6 @@ This repository contains code and content for the [IPFS Blog & News](https://blo
**If you just want to submit a link (event, academic paper, tutorial, video or news coverage) to add to the site, [use this easy form](https://airtable.com/shrNH8YWole1xc70I)!**
## For post authors/editors
There are 2 ways to create a new blog post:
- Via the [Forestry](https://forestry.io) editor
- Via a [manual pull request](#creating-a-new-blog-post-via-github-pull-request)
### Creating a new blog post using Forestry
Forestry is a content management system (CMS) that automatically creates and manages Github PRs for each new post. Using Forestry offers you WYSIWYG editing (in addition to raw markdown mode), easy image upload/crop tools, and instant previews. If you're a regular contributor to the IPFS blog and would like to request Forestry access, contact Emily Vaughan.
Forestry uses the `staging` branch as a work-in-progress scratchpad for blog content. Once content in `staging` is approved, it can be merged into `main`, which is the branch that feeds the production site at blog.ipfs.tech. Merges into `main` are _automatically deployed_ to the production site using [Fleek](https://fleek.co/).
### Forestry authoring/editing tips
- Use the "Content Types" section of Forestry's left-hand menu to drill down to the type of item (blog post, video, news coverage, event) you want to create/edit.
- For card and blog post header images, **be sure to use the [image crop/scale tool](https://blog.ipfs.tech/image-crop/)** to resize and save images so they're the correct dimensions. (Don't have an image? Don't worry, there are generic fallback images.)
- Want to embed a YouTube video in a blog post? Switch to raw markdown view and use `@[youtube](videoID)`, substituting the video's unique ID from the URL (e.g. `https://www.youtube.com/watch?v=eFbKKsEoQNg`) for `videoID`.
- To switch between WYSIWYG and raw markdown while writing a blog post, choose "Raw Editor" or "WYSIWYG Editor" from the dots menu at the top right of the page:<br/>![image](https://user-images.githubusercontent.com/1507828/110036257-fbe93e00-7cf9-11eb-935c-a70f9d21c14f.png)
### Forestry build preview tips
While WYSIWYG mode usually gives you a good enough idea of what a blog post will look like, you can also load Forestry's own _build preview_ in a new tab by clicking the eye icon at the top right of the page:<br/>![image](https://user-images.githubusercontent.com/1507828/110036918-f4766480-7cfa-11eb-9cf3-a0082e61a7a0.png)
This build preview lets you preview changes to any content type (not just blog posts), and _does not_ require you to save your changes in order to see them.
A few tips:
- Click the eye icon to _regenerate_ a build preview at any time from a Forestry edit page. You may need to reload the build preview tab if you don't see changes come through immediately.
- Occasionally, a build preview page gets stuck at a URL ending in `forestry/pending` or simply won't load. In this case, try the following:
- Remove `forestry/pending` from the URL and try again.
- Check the Previews section of Forestry's [`Site > Settings` page](https://app.forestry.io/sites/lg5t7mxcqbr-da/#/settings/previews) to see the preview server's current status, start/stop/restart the server, or examine the logs for errors. Simply restarting the preview server can fix many problems.
- If all else fails, save your changes, wait a few minutes, and take a look at [Fleek's build of the latest version of the `staging` branch](https://ipfs-blog-staging.on.fleek.co/). It's a considerably slower build/deploy time, but does reflect the latest changes once it finishes deploying.
### To deploy to the live site
Changes you _save_ in Forestry are written directly to the `staging` branch and automatically generate a staging preview at https://ipfs-blog-staging.on.fleek.co/.
**Once a staged post is ready to go live, please PR `staging` to `main` using [this handy shortcut](https://github.com/ipfs/ipfs-blog/compare/main...staging?expand=1).** Give your PR a title explaining what changes are inside (the default just says "Staging", which isn't helpful.) _Note that if multiple posts are in-flight in staging and only one is approved to go live, your PR may need some massaging by a reviewer._
_Note for PR reviewers: While we continue to dogfood Forestry, please leave your edits in comments rather than making additional commits._ As our overall workflow continues to solidify, this direction may change.
### Creating a new blog post via Github pull request
Each blog post is a markdown file in the [`src/_blog`](./src/_blog) folder, with a little metadata at the top (known as YAML frontmatter) to help us create the post index page.
@@ -98,7 +58,7 @@ Now edit the metadata at the top of the file.
Each post can have a custom image that is shown on the [blog homepage](https://blog.ipfs.tech/). To set an image:
-1. Add the image into `static\header_images`. Typically the image is `2048×1152px` in jpg/png.
+1. Add the image into `assets\header_images`. Typically the image is `2048×1152px` in jpg/png.
1. Rename the image to match the file name of your post. For example, the `2022-12-community-calendar.md` post uses `2022-12-community-calendar.png` as the header.
1. In the post markdown, edit the front-matter to include the `header_image` variable:
@@ -110,7 +70,7 @@ Each post can have a custom image that is shown on the [blog homepage](https://b
To create a pull request, you will need to fork this repository. See the GitHub docs on [how to create a pull request from a fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork). If you have the [GitHub CLI](https://cli.github.com/) installed, you can use the [`gh pr create` command](https://cli.github.com/manual/gh_pr_create) from the terminal to conveniently create a pull request.
-Once you create the pull request, await review.
+Once you create the pull request, await review. If you have permissions to merge, always preview the post first to ensure everything looks right. You can do this by clicking on the "Details" link next to the **fleek/build** check that runs automatically. Clicking this link will take you to a staging site where you will then need to click on the intended post in the feed to see it.
### To add a URL redirect for a blog post
@@ -135,7 +95,7 @@ To build a local copy, run the following:
1. Move into the `ipfs-blog` folder and install the npm dependencies:
```bash
-cd ipfs-docs
+cd ipfs-blog
npm install
```

package-lock.json generated

@@ -6364,9 +6364,9 @@
}
},
"node_modules/caniuse-lite": {
-"version": "1.0.30001376",
-"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001376.tgz",
-"integrity": "sha512-I27WhtOQ3X3v3it9gNs/oTpoE5KpwmqKR5oKPA8M0G7uMXh9Ty81Q904HpKUrM30ei7zfcL5jE7AXefgbOfMig==",
+"version": "1.0.30001470",
+"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001470.tgz",
+"integrity": "sha512-065uNwY6QtHCBOExzbV6m236DDhYCCtPmQUCoQtwkVqzud8v5QPidoMr6CoMkC2nfp6nksjttqWQRRh75LqUmA==",
"dev": true,
"funding": [
{
@@ -30067,9 +30067,9 @@
}
},
"caniuse-lite": {
-"version": "1.0.30001376",
-"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001376.tgz",
-"integrity": "sha512-I27WhtOQ3X3v3it9gNs/oTpoE5KpwmqKR5oKPA8M0G7uMXh9Ty81Q904HpKUrM30ei7zfcL5jE7AXefgbOfMig==",
+"version": "1.0.30001470",
+"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001470.tgz",
+"integrity": "sha512-065uNwY6QtHCBOExzbV6m236DDhYCCtPmQUCoQtwkVqzud8v5QPidoMr6CoMkC2nfp6nksjttqWQRRh75LqUmA==",
"dev": true
},
"caseless": {


@@ -42,20 +42,18 @@ const themeConfigDefaults = {
],
footerLegal: '',
headerLinks: [
{ text: 'About', link: 'https://ipfs.tech/#why' },
{ text: 'Install', link: 'https://ipfs.tech/#install' },
{ text: 'About', link: 'https://ipfs.tech/' },
{ text: 'Community', link: 'https://ipfs.tech/community/' },
{ text: 'Developers', link: 'https://ipfs.tech/developers/' },
{ text: 'Docs', link: 'https://docs.ipfs.tech/' },
{ text: 'Team', link: 'https://ipfs.tech/team' },
{ text: 'Blog', link: '/' },
{ text: 'Help', link: 'https://ipfs.tech/help' },
],
mobileNavLinks: [
{ text: 'About', link: 'https://ipfs.tech/#why' },
{ text: 'Install', link: 'https://ipfs.tech/#install' },
{ text: 'About', link: 'https://ipfs.tech/' },
{ text: 'Community', link: 'https://ipfs.tech/community/' },
{ text: 'Developers', link: 'https://ipfs.tech/developers/' },
{ text: 'Docs', link: 'https://docs.ipfs.tech/' },
{ text: 'Team', link: 'https://ipfs.tech/team' },
{ text: 'Blog', link: '/' },
{ text: 'Help', link: 'https://ipfs.tech/help' },
],
}
@@ -112,20 +110,18 @@ module.exports = {
},
],
headerLinks: [
{ text: 'About', link: 'https://ipfs.tech/#why' },
{ text: 'Install', link: 'https://ipfs.tech/#install' },
{ text: 'About', link: 'https://ipfs.tech/' },
{ text: 'Community', link: 'https://ipfs.tech/community/' },
{ text: 'Developers', link: 'https://ipfs.tech/developers/' },
{ text: 'Docs', link: 'https://docs.ipfs.tech/' },
{ text: 'Team', link: 'https://ipfs.tech/team' },
{ text: 'Blog', link: '/zh-cn' },
{ text: 'Help', link: 'https://ipfs.tech/help' },
],
mobileNavLinks: [
{ text: 'About', link: 'https://ipfs.tech/#why' },
{ text: 'Install', link: 'https://ipfs.tech/#install' },
{ text: 'About', link: 'https://ipfs.tech/' },
{ text: 'Community', link: 'https://ipfs.tech/community/' },
{ text: 'Developers', link: 'https://ipfs.tech/developers/' },
{ text: 'Docs', link: 'https://docs.ipfs.tech/' },
{ text: 'Team', link: 'https://ipfs.tech/team' },
{ text: 'Blog', link: '/zh-cn/' },
{ text: 'Help', link: 'https://ipfs.tech/help' },
],
},
},


@@ -35,8 +35,7 @@ module.exports = [
{
defer: true,
'data-domain': 'blog.ipfs.tech',
'data-api': 'https://proxy.daas.workers.dev/api/event',
src: 'https://proxy.daas.workers.dev/js/script.js',
src: 'https://plausible.io/js/plausible.js',
},
],
].concat(favicons)


@@ -39,6 +39,7 @@ export default {
case 'News coverage':
case 'Release notes':
case 'Tutorial':
case 'Ecosystem content':
case 'Video':
return LinkCard


@@ -26,7 +26,7 @@
class="text-blueGreen hover:underline"
href="#newsletter-form"
@click="blockLazyLoad()"
->weekly newsletter</a
+>newsletter</a
>{{ `, ` }}
<a
class="text-blueGreen hover:underline"


@@ -6,7 +6,7 @@
<div class="flex-shrink lg:max-w-sm xl:max-w-xl mb-4 lg:mb-0">
<h2 class="type-h2">Stay informed</h2>
<p class="mt-2 mr-2">
-Sign up for the IPFS Weekly newsletter (<router-link
+Sign up for the IPFS newsletter (<router-link
:to="latestWeeklyPost ? latestWeeklyPost.path : ''"
class="text-blueGreenLight hover:underline"
>example</router-link


@@ -30,7 +30,7 @@ If we take a look at some recent studies, this centralized content becomes harde
<iframe width="560" height="315" src="https://www.youtube.com/embed/P6q3lHFPN5o" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Under the hood:
-We are using tools from the [WebRecorder](https://webrecorder.net/) team to create verifiable [WebArChiveZip](https://specs.webrecorder.net/wacz/1.1.1/) files of tweets. We then assist the user in uploading these "WACZ" files to the IPFS network via [web3.storage](https://web3.storage). Here users can store all of their archived tweets in one place, and easily access them via their own IPFS node or other pinning services.
+We are using a new tool, ["save tweet now"](https://webrecorder.github.io/save-tweet-now/), from the [WebRecorder](https://webrecorder.net/) team to create verifiable [WebArChiveZip](https://specs.webrecorder.net/wacz/1.1.1/) files of tweets. We then assist the user in uploading these "WACZ" files to the IPFS network via [web3.storage](https://web3.storage). Here users can store all of their archived tweets in one place, and easily access them via their own IPFS node or other pinning services.
### Where is it available?
@@ -46,4 +46,4 @@ Pin Tweet to IPFS is currently available in the [Chrome web store](https://chrom
### What's next?
We're continuing to iterate on *Pin Tweet to IPFS* to make archiving faster and add more verification capabilities. Take a look at our [issue tracker](https://github.com/meandavejustice/pin-tweet-to-ipfs/issues) to stay up to date on upcoming changes, and [submit your feedback](https://github.com/meandavejustice/pin-tweet-to-ipfs/issues/new).
We're continuing to iterate on *Pin Tweet to IPFS* to make archiving faster and add more verification capabilities. Take a look at our [issue tracker](https://github.com/meandavejustice/pin-tweet-to-ipfs/issues) to stay up to date on upcoming changes, and [submit your feedback](https://github.com/meandavejustice/pin-tweet-to-ipfs/issues/new).


@@ -0,0 +1,82 @@
---
title: "Announcing Durin: a New Mobile App for the IPFS Network"
description: "Durin is a native mobile application for iOS and Android that lets you read and share content on the IPFS network"
date: 2023-05-11
permalink: "/announcing-durin/"
header_image: '/durin-featured-image.png'
author: David Justice
tags:
- Durin
- mobile
- ios
- android
- app store
- web3 storage
- web3
---
Today we are excited to announce **Durin**, a native mobile application for [iOS](https://apps.apple.com/us/app/durin/id1613391995) and [Android](https://play.google.com/store/apps/details?id=ai.protocol.durin) built to give users a new way to read and share with IPFS. It also serves as a sandbox for the Browsers & Platforms team to experiment with IPFS in a mobile environment.
## Background
To date, it's been difficult to access, upload, and share IPFS content using a mobile device. This is for a number of reasons, one of which is that [Kubo](https://github.com/ipfs/kubo) (the initial implementation of the protocol) was simply not built with mobile in mind. The IPFS approach to P2P for many years was about running servers, but [that is changing](https://blog.ipfs.tech/2023-03-implementation-principles/). In the meantime, we wanted to provide a quick and easy way for users to access basic IPFS features on mobile and set up a testing ground for future explorations.
## Accessing IPFS Content
The transport-agnostic nature of IPFS content addresses means there are many ways to find and retrieve content on the IPFS public network. On a mobile device, the best balance of decentralization and device performance is to align with the network model of the device OS: transient connectivity.
We do this in Durin by connecting to the IPFS network via multiple HTTP gateways. On app launch, Durin pings a list of public gateways and determines which route is the most reliable and fastest way to access the network. This approach is functional but not optimal. We're working on specifications for multi-gateway connectivity patterns that balance a number of factors, such as verifiability guarantees, reader privacy, and not overloading gateways.
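As a rough illustration (not Durin's actual code), the probe-and-pick step might look like the sketch below; the gateway list, the probe target CID, and the timing approach are all illustrative assumptions:

```javascript
// Sketch: probe each gateway with a HEAD request for a tiny well-known
// object and keep whichever answers fastest. Gateway URLs are examples.
const GATEWAYS = [
  'https://ipfs.io',
  'https://dweb.link',
  'https://cloudflare-ipfs.com',
];

// Pure helper: given [{ gateway, ms }] probe results, pick the fastest
// gateway that actually responded, or null if none did.
function fastestGateway(results) {
  const ok = results.filter((r) => Number.isFinite(r.ms));
  if (ok.length === 0) return null;
  return ok.reduce((best, r) => (r.ms < best.ms ? r : best)).gateway;
}

// Network half (Node 18+ / browser fetch); ms is Infinity on failure.
// 'bafkqaaa' is the identity CID of the empty block, a cheap probe target.
async function probeGateways(gateways = GATEWAYS, timeoutMs = 3000) {
  const probes = gateways.map(async (gateway) => {
    const start = Date.now();
    try {
      await fetch(`${gateway}/ipfs/bafkqaaa`, {
        method: 'HEAD',
        signal: AbortSignal.timeout(timeoutMs),
      });
      return { gateway, ms: Date.now() - start };
    } catch {
      return { gateway, ms: Infinity };
    }
  });
  return fastestGateway(await Promise.all(probes));
}
```

Racing all probes in parallel keeps app launch fast; a timed-out or failing gateway simply scores `Infinity` and is never selected.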
<br>
<img src="../assets/announcing-durin-ipfs/gateway-durin.png" alt="gateway list">
IPFS addresses are not natively supported in most web browsers or any mobile operating systems today. Durin registers as an `ipfs` scheme handler so that addresses are handled when encountered in applications and on the web.
On iOS, Safari redirects `ipfs://` protocol links to Durin, where the app translates the address and redirects the user to the fastest public gateway, making the content available on mobile. Unfortunately, the auto-redirects do not work in Chrome's Android app, which has not yet [implemented `registerProtocolHandler`](https://bugs.chromium.org/p/chromium/issues/detail?id=178097&q=protocol%20handler%20mobile&can=2).
<br>
<img src="../assets/announcing-durin-ipfs/durin-redirect.gif" alt="redirect functionality on mobile safari">
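The translation step itself is simple string surgery. A sketch (function name and gateway host are illustrative, not Durin's actual code), mapping `ipfs://<cid>/<path>` to a path-style gateway URL:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// toGatewayURL rewrites an ipfs:// address to a path-style HTTP gateway URL:
// ipfs://<cid>/<path> → https://<gateway>/ipfs/<cid>/<path>
// The gateway host would be whichever public gateway won the launch-time probe.
func toGatewayURL(ipfsURL, gateway string) (string, error) {
	u, err := url.Parse(ipfsURL)
	if err != nil {
		return "", err
	}
	if u.Scheme != "ipfs" {
		return "", fmt.Errorf("not an ipfs URL: %s", ipfsURL)
	}
	// The CID lands in the host position of the parsed URL.
	return "https://" + gateway + "/ipfs/" + u.Host + strings.TrimSuffix(u.Path, "/"), nil
}

func main() {
	got, err := toGatewayURL("ipfs://bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi/cat.jpg", "ipfs.io")
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```

A subdomain-style gateway URL (`https://<cid>.ipfs.<gateway>/<path>`) would give better origin isolation, at the cost of requiring case-insensitive CIDv1s.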
## Sharing to IPFS from Mobile
Mobile devices are transiently connected and low-powered, so they do not make good servers. For sharing files and data to IPFS, Durin uses a [pinning service](https://docs.ipfs.tech/concepts/persistence/#persistence-permanence-and-pinning) to do this on behalf of the user.
We currently rely on [web3.storage](https://web3.storage/) for file uploads. `web3.storage` is a service that makes decentralized file storage accessible by hosting data on IPFS for the user, the way a web host does for HTTP today. NOTE: _Using a single service like this is not ideal, as users don't hold those keys. We plan to experiment with approaches to ensuring maximal user ownership of their data while also providing remote storage and data availability._
Durin also saves a local history of uploads already shared.
<br>
<img src="../assets/announcing-durin-ipfs/filelist-durin.png" alt="uploaded files list">
Using a single remote service is a usable first step, but doesn't provide long term user control of the data published. We're looking at tighter integration with local OS data storage, local sharing between devices when possible, and pluggable remote service support.
## Install Durin
Durin is available now for mobile phones in the iOS app store and Google Play store.
<br />
<a href="https://apps.apple.com/us/app/durin/id1613391995" class="cta-button"> Get Durin in iOS App Store </a>
<br />
<a href="https://play.google.com/store/apps/details?id=ai.protocol.durin" class="cta-button"> Get Durin in Google Play Store</a>
## The Future
Durin is an experiment in learning how to expose and integrate IPFS features into mobile operating systems in ways that align optimally with those environments. We're trying out a variety of ideas, from contacts integration and photo sync & backup to Filecoin storage and peer-to-peer Bluetooth connectivity.
We'd love to hear your ideas and feedback, and have you participate!
* [ipfs-shipyard/durin on GitHub](https://github.com/ipfs-shipyard/durin)
* [HackMd project document](https://hackmd.io/XtxGZoxqQ46X1GO7srrhMQ)
* [Feedback link](https://github.com/ipfs-shipyard/durin/issues)
Join the #browsers-and-platforms channel which is bridged across the [Filecoin Slack](https://filecoin.io/slack/), [IPFS Discord](https://discord.gg/vZTcrFePpt) and [Element/Matrix](https://matrix.to/#/#browsers-and-standards:ipfs.io).
Check out the IPFS Thing talk discussing Durin's role and some future ideas for the app.
<iframe width="560" height="315" src="https://www.youtube.com/embed/QkhnKm-fCs4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Shoutout
Shout out to [Trigram](https://www.trigram.co/) for continued work on Durin.

---
title: "IPFS Implementations: It's Definitely A Thing"
description: 'IPFS implementations vary wildly in order to adapt to as many situations as possible, and more keep being created. To bring clarity to the ecosystem, we look at some principles that make IPFS what it is.'
author: Robin Berjon
date: 2023-03-31
permalink: '/2023-03-implementation-principles/'
header_image: '/2023-03-implementations-flower.jpg'
tags:
- 'community'
- 'ipfs thing'
- 'event'
---
<style>
.type-rich li + li {
margin-top:0em;
}
.type-rich h3 {
font-weight: bold;
}
.type-rich h4 {
font-weight: bold;
}
.type-rich li > ul {
padding-top:0;
}
table {
background: #fff;
}
thead {
background: #34797d;
color: #fff;
}
td, th {
text-align: left;
padding: 0.25em 0.5em;
vertical-align: top;
}
tbody tr:nth-of-type(even) {
background: rgba(245, 246, 247, var(--tw-bg-opacity));
}
tbody tr td:first-of-type {
white-space: nowrap;
}
</style>
If all you had ever seen were roses, daffodils, and violets, you would probably have a simplistic intuition of what a flower is. But as you discovered more examples of the bountiful world of flowering plants, including some more unusual varieties like the [Hydnora africana](https://en.wikipedia.org/wiki/Hydnora_africana) sci-fi monsters, the absolutely massive [Corpse Flower](https://en.wikipedia.org/wiki/Amorphophallus_titanum), or, more cutely, the [Swaddled Babies](https://gardenofeaden.blogspot.com/2020/03/the-swaddled-babies-orchid-anguloa.html) or the laughing [Bee Orchids](https://www.thehallofeinar.com/2017/06/bee-orchids-with-no-bees-to-love-them/), your idea of a flower would have to grow, until at some point you might start wondering whether you really know what counts as a flower.
In July 2022, a core group of architects, implementers, and committed builders in the IPFS community met in Reykjavik, Iceland for [IPFS þing](https://blog.ipfs.tech/ipfs-ping-2022-recap/), the first-ever gathering focused on growing and diversifying implementations of the IPFS protocol.
The event kicked off with [a call for more and different implementations](https://www.youtube.com/watch?v=xCGjxdMuKF0&list=PLuhRWgmPaHtQhyXIhu2P6e-8WlYOf8wyH&index=6) so as to make IPFS as usable and accessible as possible in today's multifaceted software environment, able to operate in a wide variety of verticals such as gaming, of languages such as Python, or of architectural constraints such as lite nodes and satellite connectivity. And in the nine months since, that's exactly what has happened: it's springtime in the distributed hemisphere and [we are frolicking across fields of tantalizing IPFS flowers](https://docs.ipfs.tech/concepts/ipfs-implementations/). With so much efflorescence, it's worth taking a step back from this [thriving broader ecosystem](https://ecosystem.ipfs.tech/) and looking at the principles that make IPFS what it is.
### Table of Contents
- [What is IPFS?](#what-is-ipfs)
- [IPFS Implementations Today](#ipfs-implementations-today)
- [A Broader View](#a-broader-view)
- [IPFS Principles](#ipfs-principles)
- [Content Addressing](#content-addressing)
- [Robustly Transport-Agnostic](#robustly-transport-agnostic)
- [Clearer Foundations](#clearer-foundations)
- [See You Soon!](#see-you-soon)
- [Appendix: Implementations](#appendix-implementations)
## What is IPFS?
Quiz time! Which one of these is IPFS?
- ❓ Users linking to NFT assets over IPFS gateway URLs
- ❓ Sharing an image from your phone to Web3.storage
- ❓ Web publishing flow of static website from GitHub to Fleek
- ❓ Two people chatting over a Bluetooth connection using IPFS CID addressed data
- ❓ Satellite beacon emitting IPFS CIDs of imagery it'll serve to an IPFS-connected ground station in that six minute window of (relatively) high bandwidth it gets a couple of times per day
- ❓ XR headset loading scenes of static content by IPFS CID, content shipped on the hardware by the OEM
- ❓ People reading Wikipedia from offline or censorship-resistant sources either due to poor connectivity or to [Internet restrictions](https://twitter.com/dietrich/status/1364978192075866115)
Answer: *All of the above!*
These ways of using IPFS are very different from one another — and that's a feature — but they all share two key characteristics:
1. Data is addressed by unique fingerprints generated from its contents
2. Which allows data use to be transport-agnostic.
This might not feel like a very thorough definition, but it already tells us a lot about what is or isn't an IPFS implementation. Let's look at the lay of the land today, and explore what being an implementation actually means.
## IPFS Implementations Today
IPFS implementations vary widely, from OS-level daemons living long and fulfilling lives in data centers, to JavaScript executing in the transient twinkle of a browser tab's eye. They have to exist in the multitude of environments where users access IPFS today, and where developers need to deploy the programs that provide that access. Many of these environments are unforgiving and may explicitly constrain available capabilities to align with the host's requirements or business model, such as mobile operating systems or IoT devices.
When developers have maximal control of an environment, they can implement IPFS to match the ideal of the vision articulated in the [original white paper](https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf). When the deployment environment is very far from being able to achieve that ideal, or when the use case at hand is too different, implementing IPFS often means reliably getting content-addressed data in or out of that system by whatever means necessary.
The diversity this demands can be seen in our implementation ecosystem. For instance, we have implementations in [Go](https://github.com/ipfs/kubo), in [Java](https://github.com/Peergos/nabu), and in [JavaScript](https://github.com/ipfs/helia), as well as [one in Rust](https://github.com/n0-computer/iroh) that optimizes for extreme efficiency. We have some targeting [clusters](https://ipfscluster.io/) or [Filecoin](https://github.com/filecoin-project/lotus), meant to work in [mobile](https://github.com/ipfs-shipyard/gomobile-ipfs) or in other [embedded environments](https://github.com/ipfs-rust/ipfs-embed) as well as [for the cloud](https://github.com/elastic-ipfs/elastic-ipfs). And [the list keeps growing](https://docs.ipfs.tech/concepts/ipfs-implementations/).
## A Broader View
Today's IPFS ecosystem is larger than most people realize, and most of us only work with a subset of it. This makes it easy to develop a restrictive intuition of what IPFS is.
For instance, it can be tempting to reach the conclusion that supporting IPFS means being interoperable with [Kubo](https://github.com/ipfs/kubo) or supporting everything that Kubo does. Kubo is, of course, an outstanding implementation but there are excellent reasons to make different decisions if you're targeting different contexts or optimizing for different goals. This is notably true when considering Filecoin: making the data stored by Filecoin storage providers accessible to other IPFS nodes can't just mean connecting Lotus to Kubo.
Many successful protocols support implementations that only do one thing well, without exercising the entire protocol's capabilities and perhaps even without being fully compliant. For instance, you could write an HTTP server that listens on port 80, throws away any method, path, or header information you send it, and always responds with a code [`418`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/418), `Content-Type` set to `image/jpeg`, and [a classic work of art](https://ipfs.io/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi) in the body. It might not be a fully compliant implementation of HTTP, it's arguably not a very useful implementation of HTTP, but it's still an implementation of HTTP. And there are millions of HTTP servers that don't support everything in the HTTP suite of standards but that nevertheless provide services that are far more valuable than our little thought experiment. The important part is that they can be used to resolve `http` URLs with authority.
This is a very useful pattern that IPFS supports as well. To give a quick and very dirty example (since that's the point), this [crude 24-line script](https://gist.github.com/darobin/9c9984586dcb133f384d3fd05f3a0bb9) can expose a Git repository as an IPFS gateway simply by making all of its objects accessible via CIDs that prefix the SHA1 hashes with `f017c1114`. Such a script could be used, for instance, to integrate a Git repository into an IPFS-based archival system. This is a far cry from being an elegant implementation, and bridging Git to IPFS warrants a cleaner approach, but the point remains that gluing systems into IPFS with a minimalistic approach is no less legitimate a deployment of IPFS than a Swiss Army knife IPFS library.
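The heart of that bridge really is just a prefix. A sketch (the prefix string is taken verbatim from the script; its trailing `11`/`14` bytes are the sha1 multihash code and its 20-byte digest length, while `f` marks a base16 multibase string and `01` CIDv1):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"regexp"
)

var sha1Hex = regexp.MustCompile(`^[0-9a-f]{40}$`)

// gitCID wraps a hex Git object hash into a base16 CIDv1 by prepending the
// multibase/version/codec/multihash prefix used by the gateway script.
func gitCID(hash string) (string, error) {
	if !sha1Hex.MatchString(hash) {
		return "", fmt.Errorf("not a lowercase hex SHA1: %q", hash)
	}
	return "f017c1114" + hash, nil
}

func main() {
	// A stand-in object hash; a real bridge would read these from .git/objects.
	sum := sha1.Sum([]byte("hello"))
	cid, err := gitCID(hex.EncodeToString(sum[:]))
	if err != nil {
		panic(err)
	}
	fmt.Println(cid)
}
```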
We should also keep in mind that many systems across the IPFS network do peer discovery and content routing outside the public DHT. This includes gateways of course, but also mDNS discovery, Gossipsub peer exchange, pinning service clusters, or wholly separate DHTs. An inclusive — but principled — view of what IPFS includes makes the ecosystem richer and more valuable for all of us.
![Many ways of being in the IPFS Network](../assets/the-ipfs-network.jpg)
## IPFS Principles
There are many ways to implement and to use IPFS, and the perspective above barely scratches the surface. But we have to be careful not to be over-inclusive: if almost anything counts as being part of IPFS, what we have isn't an ecosystem but just a bag of unrelated stuff. We need concrete principles that define the ways in which a piece of software meaningfully participates in IPFS.
These principles provide detail to the key characteristics which we listed at the beginning:
1. Data is addressed by its contents
2. Which allows data use to be transport-agnostic.
### Content Addressing
Addressing is such an elementary part of any communication protocol that it is easy to overlook how its properties define the properties of a protocol. IP addresses are assigned based on a hierarchical authority delegated by [IANA](https://www.iana.org/) and [RIRs](https://en.wikipedia.org/wiki/Regional_Internet_registry) to network administrators for local assignment. HTTP builds atop `http` URLs, which are predicated on the domain name system delegating authority to a server, and then that server's operator having full ownership of the names in that space and of the resources they map to. This idea of hierarchy and ownership is deeply ingrained in the web's fundamental architecture documents and it has consequences for how the web works: not only is everyone dependent on DNS, but when visiting a URL you are interacting with a name and a resource that are explicitly defined as someone else's property. In turn, this gives that entity power in the relationship it has with its users.
IPFS's first defining characteristic is content addressing, and this is reflected in the foundational role that it gives to [CIDs](https://github.com/multiformats/cid). IPFS is, at heart, the space of resources that can be interacted with using a CID.
This already has multiple consequences. To begin with, CIDs are defined using [multiformats](https://multiformats.io/), which makes them future-proof, self-describing, and extensible. If, for instance, a powerful new hash algorithm surfaces, we are neither stuck in the past nor forced to find a way to upgrade everything at once. We can progressively roll it out on the IPFS network. Endpoints that need to produce or consume it will need to be upgraded, but the rest of the network won't care.
This approach also means that IPFS can interoperate with existing content-addressed systems, usually with little more work than what is required to convert whatever hash they use into a CID.
CIDs form a powerful and load-bearing foundation, while nevertheless being quite simple: Juan's original [CID](https://github.com/multiformats/cid#how-does-it-work) spec is detailed enough for implementation but barely runs to half a page of Markdown, including an enthusiastic parting comment about its simplicity: "*That's it!*"
![Juan says: "That's it!"](../assets/2023-thats-it.png)
By founding IPFS on CIDs we are paving the way to a [self-certifying web](https://jaygraber.medium.com/web3-is-self-certifying-9dad77fd8d81) that shifts power to people. No need to delegate authority or to give ownership over a location: the CID is a direct relationship between endpoints, between a person and the content that the CID points to. (And [IPLD](https://ipld.io/), which is also distinguished for its systematic reliance on CIDs as links, brings similar benefits to data.)
And content-addressing is key to enabling the other foundational characteristic, which we turn to next.
### Robustly Transport-Agnostic
Addressing content is nice, but it's often more useful if you can also use it to retrieve, move, compute over, or otherwise manipulate some data. Because IPFS is built on CIDs (self-certifying, remember?), we're free to use any transport layer without introducing concerns about the integrity of the content.
This transport-agnosticity makes the entire network more adaptable to local or specific needs, and it enables experimentation with a wide range of properties about how bytes are located and moved around. It future-proofs the system and makes it nimble when it has to be supported in new places, under new constraints. Developers don't even need to worry about building local- or offline-first: IPFS is both, *always*.
In taking this approach, IPFS is revisiting and refreshing two older principles of protocol design. The first is the robustness principle, which has been expressed in different ways over the years but can be summarized as "*Be strict when sending and tolerant when receiving.*" While that formulation of the robustness principle has generally been accepted as unchallenged wisdom, it has recently [come under criticism in the protocol design community](https://datatracker.ietf.org/doc/html/draft-iab-protocol-maintenance). When a protocol is deployed at a large scale over many years and its implementations err on the side of being tolerant, what actually tends to happen is that interoperability defects accumulate over time until new implementations become too difficult to produce and the protocol starts to decay.
While we want to avoid protocol decay, some degree of tolerance is nevertheless desirable as it contributes to making the system adaptable to new situations. To address this, instead of being strict in one direction and tolerant in the other, IPFS is strict at the endpoints — where CIDs are produced or verified — and tolerant in between, open to any way that will get the bytes across.
This new take on robustness, which we might formulate as "*Be strict about the outcomes, be tolerant about the methods*," is an implementation of the [end-to-end principle](https://en.wikipedia.org/wiki/End-to-end_principle). The end-to-end principle states that the reliability properties of a protocol have to be supported at its endpoints and not in intermediary nodes. And that's exactly what CIDs enable.
## Clearer Foundations
Put together, content-addressing using CIDs and robust transport-agnosticity are what make IPFS what it is. An IPFS implementation that doesn't build atop the excellent [libp2p](https://libp2p.io/), that doesn't do everything that Kubo does, or that only retrieves verifiable content via HTTP gateways is still an IPFS implementation.
In order to help clarify both this foundation and everything that sits on top of it we've progressively been [developing better specs](https://github.com/ipfs/specs/), including a [fresh evolution of the IPIP process](https://github.com/ipfs/specs/commits/main/IPIP_PROCESS.md) and a [brand new specs site](https://specs.ipfs.tech/) (and [an IPFS Thing 2023 track to go with it](https://2023.ipfs-thing.io/#Standards-Governance-and-DWeb-Policy)!).
Part of that specification work is this proposal for [a standardized description of the principles that define IPFS](https://specs.ipfs.tech/architecture/principles/). If you are curious to read a more detailed description of the principles described in this post, I encourage you to read it.
## See You Soon!
These are exciting times: solidifying our foundations empowers us to build higher and better. The [next IPFS Thing](https://2023.ipfs-thing.io/) is just a few weeks away, April 15th-19th, in Brussels. As a community, we'll be using that opportunity to share, discuss, and blaze forward with many new IPFS capabilities and implementations. We have no doubt that, from these CIDs, many flowers will grow.
## Appendix: Implementations
The [table of implementations at docs.ipfs.tech](https://docs.ipfs.tech/concepts/ipfs-implementations/) is the one being actively maintained, and you should refer to it if you are looking for a definitive source, as the ecosystem changes fast. However, for illustrative purposes, here is the list of implementations (not counting all manners of tooling and other systems) as this blog post goes to press.
| Name | Language | What it's trying to do |
| -------- | -------- | -------- |
| [Elastic provider](https://github.com/ipfs-elastic-provider/ipfs-elastic-provider) | javascript, typescript | Scalable cloud-native implementation. |
| [Estuary](https://github.com/application-research/estuary/) | go | Daemon oriented service to pin and onboard IPFS data into Filecoin. |
| [Kubo](https://github.com/ipfs/kubo) | go | Generalist daemon oriented IPFS implementation with an [extensive HTTP RPC API](https://docs.ipfs.tech/reference/kubo/rpc/) and [HTTP Gateway API](https://docs.ipfs.tech/reference/http/gateway/). |
| [ipfs cluster](https://ipfscluster.io/) | go | Orchestration for multiple Kubo nodes via CRDT / Raft consensus |
| [iroh](https://github.com/n0-computer/iroh) | rust | Extreme-efficiency oriented IPFS implementation. |
| [Lotus](https://github.com/filecoin-project/lotus) | go | Filecoin node handling consensus, storage providing, making storage deals, importing data, ... |
| [Nabu](https://github.com/Peergos/nabu) | java | A minimal Java implementation of IPFS |
| [auspinner](https://github.com/2color/auspinner) | go | CLI tool to deal with the pinning service API and upload files through bitswap. |
| [barge](https://github.com/application-research/barge) | go | CLI tool with a git like workflow to upload deltas to estuary. |
| [Boost](https://github.com/filecoin-project/boost) | go | Daemon to get IPFS data in and out of a Filecoin storage provider. |
| [gomobile-ipfs](https://github.com/ipfs-shipyard/gomobile-ipfs) | go | Library oriented ipfs daemon to help embed Kubo into a mobile app. |
| [helia](https://github.com/ipfs/helia) | javascript | A lean, modular, and modern implementation of IPFS for the prolific JS and browser environments, currently pre-alpha but [intended to replace js-ipfs](https://github.com/ipfs/js-ipfs/issues/4336). |
| [ipfs-embed](https://github.com/ipfs-rust/ipfs-embed) | rust | Small embeddable ipfs implementation. |
| [ipfs-lite](https://github.com/hsanjuan/ipfs-lite) | go | Minimal library oriented ipfs daemon building on the same blocks as Kubo but with a minimal glue layer. |
| [ipfs-nucleus](https://github.com/peergos/ipfs-nucleus/) | go | Minimal IPFS replacement for P2P IPLD apps. |
| [js-ipfs](https://github.com/ipfs/js-ipfs) | javascript, typescript | Javascript implementation targeting nodejs and browsers. [Development of js-ipfs is being discontinued](https://github.com/ipfs/js-ipfs/issues/4336). |
| [bifrost-gateway](https://github.com/ipfs/bifrost-gateway) | go | A lightweight [HTTP+Web Gateway](https://specs.ipfs.tech/http-gateways/) daemon backed by a remote data store. [Verifies CIDs](https://docs.ipfs.tech/reference/http/gateway/#trustless-verifiable-retrieval) and enables trusted (local) use of untrusted (remote) gateways. |

---
title: What happens when half of the network is down?
description: "The IPFS DHT experienced a serious incident in the beginning of 2023, but users hardly noticed thanks to the power of a decentralized network!"
author: Yiannis Psaras
date: 2023-05-08
permalink: '/2023-ipfs-unresponsive-nodes/'
header_image: '/2023-05-ipfs-unresponsive-nodes-incident.jpeg'
tags:
- 'dht'
- 'decentralization'
- 'resource manager'
- 'nodes'
---
It depends on what type of system/network you're running. In 90% of networks, or networked systems, this is a grand-scale disaster. Alerts are popping up everywhere, engineers go far beyond “day-time work” to get things back to normal, customers are panicking and potentially leaving the platform, and the customer care lines are on fire. Half of the network is a large fraction, but I would bet that the same would happen even when 10% or 20% of the network experiences an outage.
It's not like that when you run your services on a decentralized, distributed P2P network, such as IPFS! At the beginning of 2023, a critical component of the IPFS network, namely the public IPFS DHT, experienced a large-scale incident. *During this incident, [60% of the IPFS DHT Server nodes became unresponsive](https://github.com/protocol/network-measurements/blob/master/reports/2023/calendar-week-04/ipfs/plots/crawl-unresponsive.png).* Interestingly, **no content became unreachable, and almost nothing in the network's behavior suggested that the majority of it was basically down**. We did observe a significant increase in the content routing/resolution latency (in the order of 25% initially), but this in no way reflected the scale of the event.
In this blog post, we'll go through the timeline of the event from “Detection” to “Root Cause Analysis” and give details about the engineering teams' response. A summarizing talk on the content of this blog post was given at [IPFS Thing 2023](https://2023.ipfs-thing.io/) and can be found [on YouTube](https://youtu.be/8cGEjdCfm14).
## ❗Detection: we've got a problem!
> At the beginning of 2023, a critical component of the IPFS network, namely the public IPFS DHT, experienced a large-scale malfunction. *During this situation, [60% of the IPFS DHT Server nodes became unresponsive](https://github.com/protocol/network-measurements/blob/master/reports/2023/calendar-week-04/ipfs/plots/crawl-unresponsive.png).*
>
Unresponsive here means that nodes would seem to be online, they would accept connections from other nodes, but they wouldn't reply to requests. Basically, when a node tried to write to one of the unresponsive nodes, the unresponsive node would terminate the connection immediately.
Given that these nodes seemed to be functional, they occupied several places in other nodes' routing tables, when in fact they shouldn't have.
The problem came down to a misconfiguration of the go-libp2p resource manager, a new feature that shipped with `kubo-v0.17`. The problematic configuration, which was applied manually (i.e., it was not based on the default values of `kubo-v0.17`), used values such that any attempt to interact with the nodes was flagged as a resource exhaustion event and triggered the corresponding “defense” mechanism. In practice, this materialized as a connection tear-down. It is worth noting that `kubo` is the most prevalent IPFS implementation using the public IPFS DHT, with ~80% of nodes in the DHT being `kubo` nodes (see the most recent [stats](https://github.com/protocol/network-measurements/tree/master/reports/2023/calendar-week-17/ipfs#agent-version-analysis)).
Content was still findable through kubo, so no alarms were raised. However, some of our research teams observed unusual error messages:
```
Application error 0x0 (remote): conn-22188077: system: cannot reserve inbound connection: resource limit exceeded
```
Since PUT and GET operations were completing successfully, the error didn't seem like one that would trigger widespread panic. We were seeing slower performance than normal and had been investigating whether [recent changes with Hydra boosters](https://discuss.ipfs.tech/t/dht-hydra-peers-dialling-down-non-bridging-functionality-on-2022-12-01/15567) had a bigger impact than we were expecting. It was at this time that we had a physical meeting of our engineering teams, and one of the items on the agenda was to figure out where this error was coming from.
## ❓ Diagnosis: what was happening?
We quickly realized that [there was a resource manager issue where the remote node was hitting a limit and closing the connection](https://github.com/libp2p/go-libp2p/issues/1928). After looking into the details of the resource manager and the error itself (i.e., `cannot reserve **in**bound connection`), we realized that the root cause of the issue was related to the remote node. It turned out that the resource manager had been manually misconfigured by a very large percentage of nodes, to values different from the defaults of the “vanilla” resource manager that shipped with `kubo-v0.17`.
As mentioned earlier, the GET and PUT operations were completing successfully, so our next step was to identify the scale of the problem. Our main goals were to figure out:
- what percentage of nodes in the network were affected
- if there was a performance penalty in either the PUT or the GET operation, or both
Through a combination of crawling the network and attempting connections to all ~50k DHT Server nodes (i.e., those that store and serve provider records and content), we found that close to 60% of the network had been affected by the misconfiguration. Clearly this was a very large percentage of the network, which made it urgent to look into the performance impact. We followed the methodology below:
1. We wanted to figure out which buckets in the nodes' routing tables the affected nodes occupied. We found that they occupied the higher buckets of the nodes' routing tables, which meant that most likely PUTs would get slower, but GETs should not be affected too much. This is because the DHT lookup for the GET operation terminates when it hits *one* of the 20 closest peers to the target key, while the PUT operation terminates when it has found *all* of the 20 closest peers. Since a significant portion of the network was unresponsive, the PUT operation hit at least one unresponsive node, but the GET operation had good chances of finding at least one responsive node within the 20 closest.
![output.png](../assets/ipfs-unresponsive-nodes-incident/output.png)
<br>
2. After further investigation, and given the very large percentage of nodes affected by the resource manager misconfiguration, we started looking into the impact of the incident on GET performance.
A GET request that hits one of the affected, unresponsive nodes would get the connection shut down by the remote, but would get stuck there until it timed out, at which point it would re-issue the request to another peer. The relatively high concurrency factor of the IPFS DHT (`alpha = 10`) helps in this case, as it means that for any given request up to 10 concurrent requests can be in flight. This helps a lot, even with a high percentage of unresponsive nodes, as it means that at least one of the 10 peers contacted is likely to respond.
<br>
> This is because the DHT lookup for the GET operation terminates when it hits one of the 20 closest peers to the target key, while the PUT operation terminates when it has found all the 20 closest peers.
>
<br>In the meantime, we estimated that a non-negligible number of GET requests were hitting at least one unresponsive node during the lookup process. Such an event results in a timeout and significantly increases the request latency. There is a high probability that an unresponsive node is encountered during the last hops of the DHT walk, because unresponsive peers are mostly present in higher buckets, as the above figure shows.
<br>
3. To quantify the impact, we crawled the network and gathered the PeerIDs of unresponsive nodes. We set up six kubo nodes in several locations around the globe and attempted to: i) publish content (PUT), and, ii) retrieve content (GET) for two cases: 1) when interacting with all nodes in the network, and, 2) when ignoring all responses from the unresponsive peers, whose PeerIDs we knew and cross-checked in real time.
- The results we found were as follows:
- The PUT operation was slowed down by approximately 10%
<br>
![output2.png](../assets/ipfs-unresponsive-nodes-incident/output2.png)
<br>
- The GET operation was also disrupted (in contrast to our initial assumption) and was slowed down by approximately 15%, at times reaching closer to 20%.
<br>
![output.png](../assets/ipfs-unresponsive-nodes-incident/output_1.png)
<br>
4. We also experimented with even higher concurrency factors, in particular with `alpha = 20`, as a potential mitigation strategy. We repeated the same experiment with one extra set of runs: the case where we interact with all nodes in the network (i.e., we do not ignore unresponsive peers), but have higher concurrency factor.
<br>We found that performance increased and went back to pre-incident levels. However, it was decided *not* to go down this path, as the increased concurrency factor would: i) significantly increase the overhead/traffic in the DHT network, and, ii) keep benefiting nodes that never upgrade later on (once the incident is resolved), giving a clear advantage to those nodes.
## 🚑 Mitigation: How We Stopped the Bleeding
The team's immediate focus became:
1. [Adding/updating documentation on Kubo's resource manager integration](https://github.com/ipfs/kubo/blob/master/docs/libp2p-resource-management.md)
2. Triaging and responding to user questions/issues ([example](https://github.com/ipfs/kubo/issues/9432))
3. Preparing a new kubo release (`v0.18.1`), where the default settings for the resource manager were set to more appropriate values. This reduced the likelihood that someone would need to adjust the resource manager configuration manually, thus avoiding the configuration “footguns”.
4. Encouraging as many nodes as possible to upgrade through public forums and direct relationships with known larger scale operators.
In parallel, we kept monitoring the situation through a PUT and GET measurement experiment that had been running since before the `kubo-v0.18.1` update, as the affected nodes gradually upgraded.
`kubo-v0.18.1` was [released on 2023-01-30](https://github.com/ipfs/kubo/releases/tag/v0.18.1) and within the first 10 days, more than 8.5k nodes updated to this release. Our monitoring software gave us an accurate view of the state of the network, and we observed that the update to the new kubo release brought a significant performance increase for the GET operation - more than 40% at the 95th percentile on a sample of ~2k requests, compared to the situation before the `kubo-v0.18.1` release.
![output.png](../assets/ipfs-unresponsive-nodes-incident/output_2.png)
We also compared against pre-incident performance by running the experiment where we ignored the set of PeerIDs that were identified as affected by the misconfiguration. Based on a sample of more than 20k GET operations, the figure below shows that the impact had been reduced to ~5% (mid-February 2023).
![output.png](../assets/ipfs-unresponsive-nodes-incident/output_3.png)
## 🔧 Addressing the Root Cause
Our immediate actions managed to stop the bleeding and bring the network back to normal quickly. However, it was clear that we had to implement longer-term fixes to protect nodes' routing tables from unresponsive peers and to avoid inadvertently making nodes unresponsive. Specifically, this translated to:
1. Revamping the Kubo resource manager UX to further reduce the likelihood of catastrophic misconfiguration. This was completed in [Kubo 0.19](https://github.com/ipfs/kubo/releases/tag/v0.19.0#improving-the-libp2p-resource-management-integration).
2. Only adding peers to the routing table that respond to requests, both [during the routing table refresh](https://github.com/libp2p/go-libp2p-kad-dht/pull/810) (done) and [upon adding a node to the routing table](https://github.com/libp2p/go-libp2p-kad-dht/issues/811) (in progress - targeting [Kubo 0.21 in May](https://github.com/ipfs/kubo/issues/9814)).
## 📖 Lessons Learned
In the days since, we have come away from this experience with several important learnings:
🗒️ Significant fundamental changes to the codebase (such as retroactively adding resource accounting) are ripe for disruption. This increases the necessity for documentation, announcements, and clear recommendations to node operators.
📊 Monitoring software should always be in place to help identify such events from the start.
📣 It is challenging to monitor and apply changes directly to the software that runs on nodes of a decentralized network. Well-established communication channels go a long way and help the engineering teams communicate directly with the community. In IPFS, we use a variety of channels including the Discord Server [[invite link](https://discord.gg/ipfs)], Filecoin Slack [[invite link](https://filecoin.io/slack)] (mostly in `#engres-ip-stewards` channel), the [Discourse discussion forum](https://discuss.ipfs.tech/), and the [blog](https://blog.ipfs.tech/).
🚀 Last, but certainly not least, the decentralized, P2P nature of IPFS kept the network running with all important operations completing successfully (albeit slower than normal). It is exactly because of the structure of the network that there are no single points of failure and performance is not catastrophically disrupted even when more than half of the network nodes are essentially unresponsive.

---
title: ⛔️ js-IPFS deprecation / replaced by Helia 🌞
description: 'js-IPFS is being deprecated, and has been superseded by Helia.'
author: Alex Potsides (@achingbrain)
date: 2023-05-26
permalink: '/202305-js-ipfs-deprecation-for-helia/'
header_image: '/2023-05-js-ipfs-deprecation-for-helia-header-image.png'
tags:
- 'helia'
- 'js-ipfs'
---
**TL;DR: [js-IPFS](https://github.com/ipfs/js-ipfs) is being deprecated, and has been superseded by [Helia](https://github.com/ipfs/helia).**
There are exciting times ahead for IPFS in JS. Some of you may have already heard of [Helia](https://github.com/ipfs/helia), the new implementation that's designed as a composable, lightweight, and modern replacement for js-IPFS.
It has a [simplified API](https://ipfs.github.io/helia/interfaces/_helia_interface.Helia.html) which can be extended by other modules depending on the requirements of your application such as [@helia/unixfs](https://github.com/ipfs/helia-unixfs), [@helia/ipns](https://github.com/ipfs/helia-ipns), [@helia/dag-cbor](https://github.com/ipfs/helia-dag-cbor) and [others](https://github.com/ipfs/helia#-code-structure).
It ships with the latest and greatest libp2p, which means it has the best connectivity options, including the new [WebTransport](https://github.com/libp2p/js-libp2p-webtransport) and [WebRTC](https://github.com/libp2p/js-libp2p-webrtc) transports that dramatically improve the connectivity options for browser environments.
[js-IPFS is in the process of being deprecated](https://github.com/ipfs/js-ipfs/issues/4336), so you should port your apps to Helia to receive bug fixes, features, and performance improvements moving forward.
📚 [Learn more about this deprecation](https://github.com/ipfs/js-ipfs/issues/4336) or [how to migrate](https://github.com/ipfs/helia/wiki/Migrating-from-js-IPFS).
More new blog content discussing Helia coming soon!

---
title: IPFS Multi-Gateway Experiment in Chromium
description: A new approach to implementing ipfs:// and ipns:// support natively in the browser, using a client-only approach and fetching verifiable responses from multiple HTTP gateways.
author: John Turpish
date: 2023-06-01
permalink: "/2023-05-multigateway-chromium-client/"
translationKey: 2023-05-multigateway-chromium-client
header_image: "/multi-gateway-experiment.png"
tags:
- browsers
- chromium
---
[IPFS](https://ipfs.tech) is a protocol suite for [content-addressed networking](https://en.wikipedia.org/wiki/Content-addressable_network). If you'd like to run a [node](https://docs.ipfs.tech/concepts/glossary/#node) and participate in the peer-to-peer network, by all means [give it a try](https://ipfs.tech/#install)!
The most important thing to get: With IPFS you can fetch something by a Content ID ([CID](https://docs.ipfs.tech/concepts/glossary/#cid)), which represents what it is, not where it's coming from.
The other way of fetching things from the IPFS ecosystem is through [IPNS](https://docs.ipfs.tech/concepts/ipns/#mutability-in-ipfs), which allows someone to cryptographically sign a reference to a CID, then you can request whatever content that person/organization is currently pointing to as their site.
Essentially, `http://` specifies "where" to find it, `ipfs://` specifies "what" to find, and `ipns://` specifies "whose" content to find.
What about people who don't know about IPFS, and just run across a [link](https://docs.ipfs.tech/concepts/glossary/#link)? What if they'd like to be able to use that link in their browser? This is where a "client" fits in - software that can talk to nodes to fetch the content they want, but without running one yourself.
## What is this all about?
Most IPFS clients talk to a particular HTTP [gateway](https://docs.ipfs.tech/concepts/glossary/#gateway). Multi-Gateway Clients proposed in [IPIP-359](https://github.com/ipfs/specs/pull/359) fulfill your requests using multiple [Trustless Gateways](https://specs.ipfs.tech/http-gateways/trustless-gateway/). This gives you more resilience, as you're not dependent on a single HTTP endpoint that can be censored or blocked by your ISP. It also can result in better performance, as you can multiplex requests that would typically run through a single server.
Here we're talking about [a project to implement IPFS in Chromium](https://github.com/little-bear-labs/ipfs-chromium). The result is an experimental racing multi-gateway client built directly into the browser, which means the same request might get sent to multiple Trustless Gateways, and the first one to get the result verified wins. And it's built into a custom-patched build of Chromium.
## Why build this?
This is by no means the first time IPFS has been usable in a browser, or even Chromium-based browsers in particular. Javier Fernández at Igalia has written some good explanations of other approaches that have been taken over at his blog in his post *[Discovering Chromes pre-defined Custom Handlers](https://blogs.igalia.com/jfernandez/2022/11/14/discovering-chromes-pre-defined-custom-handlers/)*, and there's an [overview on the IPFS blog](https://blog.ipfs.tech/14-11-2022-igalia-chromium/) as well.
Most of these approaches share in common the idea of translating IPFS and [IPNS](https://docs.ipfs.tech/concepts/glossary/#ipns) requests, 1:1, into HTTP requests. For example, if you have an HTTP gateway running locally on your machine, something like:
> ipfs://bafybeihpy2n6vwt2jjq5gusv23ajtilzbao3ekfb2hiev2xvuxscdxqcp4
might become:
> http://localhost:8080/ipfs/bafybeihpy2n6vwt2jjq5gusv23ajtilzbao3ekfb2hiev2xvuxscdxqcp4/
Or maybe it could become
>[https://ipfs.io/ipfs/bafybeihpy2n6vwt2jjq5gusv23ajtilzbao3ekfb2hiev2xvuxscdxqcp4/](https://ipfs.io/ipfs/bafybeihpy2n6vwt2jjq5gusv23ajtilzbao3ekfb2hiev2xvuxscdxqcp4/)
Or preferably (when [Origin isolation](https://en.wikipedia.org/wiki/Same-origin_policy) matters):
>[https://bafybeihpy2n6vwt2jjq5gusv23ajtilzbao3ekfb2hiev2xvuxscdxqcp4.ipfs.dweb.link/](https://bafybeihpy2n6vwt2jjq5gusv23ajtilzbao3ekfb2hiev2xvuxscdxqcp4.ipfs.dweb.link/)
In each case, you're delegating all the "IPFS stuff", including CID (hash) verification, to a particular node. This is effective for completing requests, but has some trade-offs, including the privacy and integrity risks when using a remote gateway provided by a third-party.
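As an illustration, the translations above can be sketched as two small helper functions (hypothetical names, not part of any real client):

```go
package main

import (
	"fmt"
	"strings"
)

// pathGatewayURL rewrites ipfs://<cid>[/path] into a path-style
// gateway URL served from a single origin.
func pathGatewayURL(u, gateway string) string {
	return gateway + "/ipfs/" + strings.TrimPrefix(u, "ipfs://")
}

// subdomainGatewayURL uses the origin-isolating form, where the CID
// becomes a subdomain so the browser's same-origin policy applies.
func subdomainGatewayURL(u, gatewayHost string) string {
	rest := strings.TrimPrefix(u, "ipfs://")
	cid, path, _ := strings.Cut(rest, "/")
	return "https://" + cid + ".ipfs." + gatewayHost + "/" + path
}

func main() {
	u := "ipfs://bafybeihpy2n6vwt2jjq5gusv23ajtilzbao3ekfb2hiev2xvuxscdxqcp4"
	fmt.Println(pathGatewayURL(u, "http://localhost:8080"))
	fmt.Println(subdomainGatewayURL(u, "dweb.link"))
}
```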
### Performance
If the gateway you're using happens to have the data you're seeking already on hand, your performance will be great, since it can simply return what it already has. Performance might even be better than the multi-gateway client, since no extraneous requests would be made. However, if you're unlucky, that gateway will have to spend more time querying the IPFS network to find the data you requested before it gives up. The ideal gateway to use may very well depend on what you happen to be doing at the moment - and may differ from one of your tabs to another. A multi-gateway client will hit the worst-case performance more rarely.
And while we've been talking about "files" for the most part, IPFS breaks larger files down into "blocks". So you can apply these same techniques at the block level, and it's also conceivable that for a sufficiently large file which exists on multiple gateways you're talking to, a verifying multi-gateway client might be able to be faster than a single-gateway client, since you might be pulling down parts of the file from different sources concurrently. [RAPIDE](https://github.com/ipfs/go-libipfs-rapide/issues/12) is a more advanced in-development client which also makes use of this principle (along with other things). And it's showing promising results - watch a [recent talk from IPFS Thing by Jorropo](https://www.youtube.com/watch?v=Cv01ePa0G58) on it!
### Installation (vs. local gateway)
If you're reading this, installing a local node might seem like no big deal to you. However, we want IPFS to be accessible to people who haven't heard of it, and make it easy for them to click a link without having to think about which protocol-handling software they have installed ahead of time.
One approach is to have the browser install and start its own IPFS node. This is a pretty reasonable approach, but it can raise questions about when to dedicate resources to installation or the node's [daemon](https://docs.ipfs.tech/concepts/glossary/#daemon). The most notable example of this approach is [Brave](https://brave.com/ipfs-support/).
![Brave IPFS Choice](../assets/brave-choice.png)
However, regardless of whether the browser manages a [Kubo](https://github.com/ipfs/kubo#readme) node as Brave does or implements IPFS natively, the architecture of the application has changed in a significant way - *from being strictly a client, to being a server*.
Including HTTP-client-only IPFS capabilities in a Chromium-based browser doesn't change the installation experience in a noticeable way, nor require any major rethink of the browser security model.
### Security (vs. single public gateway)
Content-addressed networking involves a validation step to make sure that the data you received matches the [hash](https://docs.ipfs.tech/concepts/glossary/#hash) requested (it's part of the CID). When you're requesting a file from an HTTP gateway, by default, the verification of the content is delegated to the node running the gateway. Further, if you receive the file in its final deserialized form as a response to a single request, naively using just an HTTP client, it's no longer possible to verify it locally.
This is probably fine if the gateway you're talking to is one you're running locally. Presumably you trust that software as much as you trust your own browser.
The public IPFS gateways today appear to be consistently and reliably returning the correct results. Nonetheless, the possibility of tampering exists, and it would be preferable if we didn't have to trust them. That's why this experimental Chromium implementation uses the [Trustless Gateway](https://specs.ipfs.tech/http-gateways/trustless-gateway/) API and verifies the retrieved content locally.
## Where is the code?
In the repo you'll see separation between [component](https://github.com/little-bear-labs/ipfs-chromium/tree/main/component) and [library](https://github.com/little-bear-labs/ipfs-chromium/tree/main/library), where the former contains Chromium-specific code, and the latter contains code that helps with IPFS implementation details that can build without Chromium.
This distinction disappears when you switch over to the Chromium build. Both sets of source are dumped into a component (basically a submodule) called `ipfs`, that implements the handling of `ipfs://` and `ipns://` URLs.
Those who embed Chromium into another application generally provide an implementation of a couple of interfaces, namely `ContentClient` and `ContentBrowserClient`. They would need to add a little code to their implementations to use the `ipfs` component. Our repo contains a patch file which alters Chrome's implementations of these two as a demonstration to show how usage might work. That patch file might be useful as-is to someone who uses a patching approach to make a Chromium-derived browser.
## How (in more detail)?
### Hooking into Chromium
* The `ipfs://` and `ipns://` schemes are registered in [`ContentClient::AddAdditionalSchemes`](https://source.chromium.org/chromium/chromium/src/+/main:content/public/common/content_client.h;l=156?q=AddAdditionalSchemes), so the origin will be handled properly.
* An interceptor is created in [`ContentBrowserClient::WillCreateURLLoaderRequestInterceptors`](https://source.chromium.org/chromium/chromium/src/+/main:content/public/browser/content_browser_client.h;l=1733?q=WillCreateURLLoaderRequestInterceptors), which just checks the scheme, so `ipfs://` and `ipns://` navigation requests will be handled by `components/ipfs`.
* URL loader factories created for `ipfs` and `ipns` schemes in [`ContentBrowserClient::RegisterNonNetworkSubresourceURLLoaderFactories`](https://source.chromium.org/chromium/chromium/src/+/main:content/public/browser/content_browser_client.h;l=1503?q=RegisterNonNetworkSubresourceURLLoaderFactories), so in-page resources with `ipfs://` / `ipns://` URLs (or relative URLs on a page loaded as `ipfs://`), will also be handled by `components/ipfs`.
### Issuing HTTP(S) requests to Trustless Gateways
The detailed steps of the algorithm are laid out in [the design doc](https://github.com/little-bear-labs/ipfs-chromium/blob/main/DESIGN.md), but here's the basic idea:
* An IPFS link will have a CID in the URL. This is the [root](https://docs.ipfs.tech/concepts/glossary/#root) of its [DAG](https://en.wikipedia.org/wiki/Merkle_tree), which contains directly or indirectly all the info needed to get all the files related to the site, and will be the first [block](https://docs.ipfs.tech/concepts/glossary/#block) needed to access the file/resource.
* For any given block that is known to be needed, but not present in-memory, send requests to several gateways which haven't responded with an error for this CID yet and don't currently have pending requests to them. These requests have `?format=raw` so that we'll get just the one block (with `Content-Type` [application/vnd.ipld.raw](https://www.iana.org/assignments/media-types/application/vnd.ipld.raw)), not the whole file.
* When a response comes from a gateway, hash it according to the algo specified in the CID's [multihash](https://docs.ipfs.tech/concepts/glossary/#multihash). Right now, that has to be sha-256, and luckily it generally is. If the hashes don't match, the gateway's response gets treated much like an error - the gateway gets reduced in priority, and a new request goes out to a gateway that hasn't yet received this request.
* If the hashes are equal, store the block, process the block as described in Codecs (below). If the new node includes links to more blocks we also need, send requests for those blocks.
* When the browser has all the blocks needed, piece together the full file/resource and create an HTTP response and return it, as if it had been a single HTTP request all along.
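The fetch-verify-queue loop above can be sketched roughly as follows; `fetch` and `linksOf` are hypothetical stand-ins for the `?format=raw` gateway request and the codec decoding steps, and hash verification is elided:

```go
package main

import "fmt"

// fetchAll walks the DAG from the root CID: fetch a block (one
// `?format=raw` request), record it, then queue whichever of its
// links we don't have yet.
func fetchAll(root string, fetch func(string) []byte, linksOf func(string) []string) map[string][]byte {
	have := map[string][]byte{}
	queue := []string{root}
	for len(queue) > 0 {
		cid := queue[0]
		queue = queue[1:]
		if _, ok := have[cid]; ok {
			continue
		}
		have[cid] = fetch(cid)
		for _, l := range linksOf(cid) {
			if _, ok := have[l]; !ok {
				queue = append(queue, l)
			}
		}
	}
	return have
}

func main() {
	// "b" is linked from both "root" and "a" but fetched only once.
	dag := map[string][]string{"root": {"a", "b"}, "a": {"b"}, "b": {}}
	blocks := fetchAll("root",
		func(cid string) []byte { return []byte("block:" + cid) },
		func(cid string) []string { return dag[cid] })
	fmt.Println(len(blocks)) // 3
}
```

In the real client the per-block requests go out concurrently to several gateways, as described above, rather than sequentially.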
### Codecs
If a CID is V0, we assume the [codec](https://docs.ipfs.tech/concepts/glossary/#codec) is [`dag-pb`](https://docs.ipfs.tech/concepts/glossary/#dag-pb) (see below). Other CIDs specify the codec, and right now we support these 2:
#### `raw` (`0x55`)
A block of this type is a blob - a bunch of bytes. We'll populate the body of the response with it.
#### `dag-pb` (`0x70`)
That's [ProtoBuf](https://protobuf.dev/)-encoded [Directed Acyclic Graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph). A block of this type is a node in a DAG, and contains some bytes to let you know what kind of node it is. There is one very special and important type of node ipfs-chromium deals with a lot:
##### UnixFS Node
The payload of these nodes carries another ProtoBuf layer, and the DAG functions in a conceptually similar way to a read-only file system.
Not all kinds of UnixFS nodes are fully handled yet, but we cover these:
###### File (simple case)
These nodes each have a `data` byte array that is the contents of a file. We'll use those bytes as the body of a response.
###### File (multi-node)
In UnixFS a node can represent a file as the concatenation of other file nodes, to which it has `links`. The decision to use this kind of node generally has to do with the size of the file. A single node can't be much more than a megabyte, so files larger than that get cut into chunks and handled as a tree of nodes. There are a couple of reasons for that:
* Data deduplication (it's possible the same sequences of bytes, and thus same CID, appears in multiple files or even within the same file)
* In the case that a gateway were malicious, we wouldn't want to wait until a file of potentially unbounded size finishes downloading before we verify that it's correct. "ipfs-chromium" enforces a limit of 2MB per block.
* It enables the possibility that one could concurrently fetch different parts of the file from different gateways.
If we have all the nodes linked-to already, we can concatenate their data together and make a response body out of it. If we don't, we'll convert the missing links to CIDs and request them from gateways.
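A minimal sketch of this concatenation, with a hypothetical `node` type standing in for decoded `dag-pb`/UnixFS nodes:

```go
package main

import (
	"bytes"
	"fmt"
)

// node is a stand-in for a decoded dag-pb/UnixFS node.
type node struct {
	data  []byte
	links []string // CIDs of child nodes, in file order
}

// fileBytes reassembles a chunked file by concatenating its child
// nodes' data in link order; children may themselves be subtrees.
func fileBytes(root string, nodes map[string]node) []byte {
	n := nodes[root]
	if len(n.links) == 0 {
		return n.data // simple case: one node holds the whole file
	}
	var buf bytes.Buffer
	for _, l := range n.links {
		buf.Write(fileBytes(l, nodes))
	}
	return buf.Bytes()
}

func main() {
	nodes := map[string]node{
		"root": {links: []string{"c1", "c2"}},
		"c1":   {data: []byte("hello, ")},
		"c2":   {data: []byte("world")},
	}
	fmt.Println(string(fileBytes("root", nodes))) // hello, world
}
```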
###### Directory (normal)
In this case the `data` field isn't really important to us. The `links`, however, represent items in the directory.
* If your URL has a path, find the `link` matching the first element in the path, and repeat the whole process with that `link`'s CID and the remainder of the path.
* If you don't have a path, we'll assume you want `index.html`
* If there's no `index.html` we'll generate a directory listing HTML file for you.
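The directory walk above can be sketched like so (hypothetical types; a real implementation resolves each link lazily from fetched blocks):

```go
package main

import (
	"fmt"
	"strings"
)

// resolvePath walks a directory DAG: at each step, pick the link whose
// name matches the next path element; an empty path falls back to
// index.html. children maps a directory CID to its name-to-CID links.
func resolvePath(root, path string, children map[string]map[string]string) (string, bool) {
	elems := strings.FieldsFunc(path, func(r rune) bool { return r == '/' })
	if len(elems) == 0 {
		elems = []string{"index.html"}
	}
	cid := root
	for _, e := range elems {
		next, ok := children[cid][e]
		if !ok {
			return "", false
		}
		cid = next
	}
	return cid, true
}

func main() {
	children := map[string]map[string]string{
		"dirCID":    {"index.html": "pageCID", "img": "imgDirCID"},
		"imgDirCID": {"logo.png": "logoCID"},
	}
	cid, _ := resolvePath("dirCID", "", children) // no path: index.html
	fmt.Println(cid)
	cid, _ = resolvePath("dirCID", "img/logo.png", children)
	fmt.Println(cid)
}
```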
###### [HAMT](https://en.wikipedia.org/wiki/Hash_array_mapped_trie) (sharded) Directory
This is for directories with just too many entries in them to fit in a single block. The links from this directory node might be entries in the directory or they might be other HAMT nodes referring to the same directory (basically, the directory itself is getting split up over a tree of nodes).
* If you're coming in from another HAMT node, you might have some unused bits of the hash to select the next child.
* If you have a path, hash the name of the item you're looking for, pop the correct number of bits off the hash, and use it to select which element you're going to next.
* If you don't have a path, we'll assume you want `index.html`.
* We don't generate listings of sharded directories today, and this isn't a high priority as it's an unreasonable use case.
### Dealing with ipns:// links
The first element after `ipns://` is the "[ipns-name](https://specs.ipfs.tech/ipns/ipns-record/#ipns-name)".
* If the name is formatted as a CIDv1, and has its codec set to `libp2p-key` (`0x72`), ipfs-chromium will retrieve a [signed IPNS record](https://specs.ipfs.tech/ipns/ipns-record/#ipns-record) of what it points at from a gateway, and then load that content.
* The cryptographic signature in the record is verified using the public key, which corresponds to the "ipns-name"
* Note: only two [multibase](https://docs.ipfs.tech/concepts/glossary/#multibase) encodings are fully supported for now: base36 and base32. If your IPNS or DNSLink record points to something base58 that should work, but otherwise avoid it (don't use it in the address bar!).
* If the name is not formatted as a CIDv1, a DNS request is created for the appropriate TXT record to resolve it as a [DNSLink](https://dnslink.dev/).
IPNS names may point to other IPNS names, in which case this process recurses. More commonly they point at an IPFS DAG, in which case ipfs-chromium will then load that content as described above.
## Bottom Line
So, in the end, the user gets to treat `ipfs://` links to snapshotted data like any other link, gets the result in a reasonable timeframe, and can rely on the data they get back being the correct data.
`ipns://` URLs of the DNSLink variety rely only on DNS being accurate.
Regular `ipns://` URLs, however, are verified by the cryptographically signed [record](https://specs.ipfs.tech/ipns/ipns-record/).
## Trying it out
If you want to try this yourself today, you can [build it](https://github.com/little-bear-labs/ipfs-chromium/blob/main/BUILDING.md) from source, or you may install a pre-built binary from [GitHub releases](https://github.com/little-bear-labs/ipfs-chromium/releases/) or [an IPFS gateway](https://gateway.ipfs.io/ipfs/QmdsmW9pSM8kQsnwFpHrqQFskv6H26XzhnZWYHGVdAfcbm).
If you'd just like to see it in action, here are the links I use in the video below:
* `ipfs://bafybeigchjo5f3jyzfjwmbavhr27jwdhu6wwhsodxg4qq4x72aasxewp64/blog.html` - a snapshot of this blog post
* `ipns://k51qzi5uqu5dkq4jxcqvujfm2woh4p9y6inrojofxflzdnfht168zf8ynfzuu1/blog.html` - a mutable pointer to the current version of this blog
* `ipns://docs.ipfs.tech` - The IPFS documentation.
* `ipns://en.wikipedia-on-ipfs.org/wiki/` - Wikipedia, as a big HAMT + DNSLink
* `ipns://ipfs.io` - an unusual case: a DNSLink to another DNSLink
* `https://littlebearlabs.io` - an HTTPS URL for comparison.
<iframe width="70%" src="https://www.youtube.com/embed/9XJOktFizlo" frameborder="1" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## When could this be widespread?
This is very experimental, and will not be in mainstream browsers tomorrow. Feel free to vote for [the issue](https://bugs.chromium.org/p/chromium/issues/detail?id=1440503) where we discuss its future.
## Who is doing this?
[Little Bear Labs](https://littlebearlabs.io) and [Protocol Labs](https://protocol.ai)

---
title: A Rusty Bootstrapper
description: 'Running rust-libp2p-server on one of our four IPFS bootstrap nodes.'
author: Max Inden (@mxinden)
date: 2023-07-24
permalink: '/2023-rust-libp2p-based-ipfs-bootstrap-node/'
header_image: ''
tags:
- 'Kademlia'
- 'Rust'
---
# Summary
As of July 13, 2023, one of the four "public good" IPFS bootstrap nodes operated by Protocol Labs has been running [rust-libp2p-server](https://github.com/mxinden/rust-libp2p-server) instead of [Kubo](https://github.com/ipfs/kubo), which uses [go-libp2p](https://github.com/libp2p/go-libp2p/). rust-libp2p-server is a thin wrapper around [rust-libp2p](https://github.com/libp2p/rust-libp2p). We run both Kubo and rust-libp2p-server on IPFS bootstrap nodes to increase resilience. A bug or vulnerability is less likely to be in both Kubo and rust-libp2p-server than Kubo alone. In addition to increasing resilience, we gain experience running large rust-libp2p based deployments on the IPFS network.
![rust-libp2p bootstrap node establishing its first connections](../assets/2023-07-rust-libp2p-based-ipfs-bootstrap-node-connections-established.png)
# IPFS Public DHT Bootstrap Nodes
_What is an IPFS bootstrap node?_
> A Bootstrap Node is a trusted peer on the IPFS network through which an IPFS node learns about other peers on the network. [...]
See [IPFS Glossary](https://docs.ipfs.tech/concepts/glossary/#bootstrap-node).
A new node trying to join the "[public IPFS DHT](https://github.com/ipfs/ipfs/discussions/473)", i.e. trying to bootstrap, will:
1. Connect to its (pre-) configured bootstrap nodes.
2. Run some variation of the [Kademlia bootstrap process](https://github.com/libp2p/specs/tree/master/kad-dht#bootstrap-process) which boils down to iteratively:
1. Generating random IDs.
2. Asking already discovered nodes whether they know anyone closer to those IDs.
Thus the only thing that an IPFS bootstrap node needs to do is:
- Allow incoming connections.
- Maintain a healthy Kademlia routing table.
- Reply to Kademlia `FIND_NODE` requests based on nodes in its routing table.
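A toy sketch of the iterative discovery loop, using 8-bit IDs in place of real 256-bit Kademlia keys, and exhaustive neighbor queries in place of the random-target lookups described above:

```go
package main

import (
	"fmt"
	"sort"
)

// bootstrap repeatedly asks already-discovered peers who they know,
// merging the answers until no new peers turn up, then orders the
// result by XOR distance to our own ID, as a routing table would.
func bootstrap(self byte, seeds []byte, neighbors func(byte) []byte) []byte {
	known := map[byte]bool{}
	for _, s := range seeds {
		known[s] = true
	}
	for learned := true; learned; {
		learned = false
		snapshot := make([]byte, 0, len(known))
		for p := range known {
			snapshot = append(snapshot, p)
		}
		for _, p := range snapshot {
			for _, q := range neighbors(p) {
				if q != self && !known[q] {
					known[q] = true
					learned = true
				}
			}
		}
	}
	out := make([]byte, 0, len(known))
	for p := range known {
		out = append(out, p)
	}
	sort.Slice(out, func(i, j int) bool { return out[i]^self < out[j]^self })
	return out
}

func main() {
	// a tiny static network: each peer knows a couple of others
	net := map[byte][]byte{1: {2, 3}, 2: {4}, 3: {4, 5}, 4: {}, 5: {}}
	peers := bootstrap(0, []byte{1}, func(p byte) []byte { return net[p] })
	fmt.Println(len(peers)) // 5
}
```

Starting from a single seed, the whole reachable network is discovered, which is exactly the role the bootstrap nodes play for a freshly started peer.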
Let's dive a bit deeper. In the case of Kubo the [DNSAddr](https://github.com/multiformats/multiaddr/blob/master/protocols/DNSADDR.md) addresses of the IPFS bootstrap nodes are shipped within the release binary.
``` go
// DefaultBootstrapAddresses are the hardcoded bootstrap addresses
// for IPFS. they are nodes run by the IPFS team. docs on these later.
// As with all p2p networks, bootstrap is an important security concern.
var DefaultBootstrapAddresses = []string{
"/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
// ...
}
```
See [`bootstrap_peers.go` on github.com/ipfs/kubo](https://github.com/ipfs/kubo/blob/v0.21.0/config/bootstrap_peers.go#L11C1-L24C2).
One can resolve those `/dnsaddr/...` addresses through iterative DNS queries. Below is an example for the node with the peer ID `QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb`. This IPFS bootstrap node is running Kubo.
```
dig +short -t txt _dnsaddr.bootstrap.libp2p.io
"dnsaddr=/dnsaddr/am6.bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb"
[...]
```
```
dig +short -t txt _dnsaddr.am6.bootstrap.libp2p.io
"dnsaddr=/ip6/2604:1380:4602:5c00::3/udp/4001/quic-v1/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb"
[...]
```
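The same resolution can be sketched in Go. `resolveDNSAddr` is a hypothetical helper (written recursively rather than iteratively, for brevity), with the TXT lookup injected so a real resolver such as `net.LookupTXT` could be dropped in:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveDNSAddr looks up TXT records at _dnsaddr.<host>, strips the
// "dnsaddr=" prefix, and recurses while the result is itself a
// /dnsaddr/ multiaddr.
func resolveDNSAddr(host string, lookupTXT func(string) []string) []string {
	var out []string
	for _, txt := range lookupTXT("_dnsaddr." + host) {
		addr := strings.TrimPrefix(txt, "dnsaddr=")
		if rest, ok := strings.CutPrefix(addr, "/dnsaddr/"); ok {
			next, _, _ := strings.Cut(rest, "/")
			out = append(out, resolveDNSAddr(next, lookupTXT)...)
			continue
		}
		out = append(out, addr)
	}
	return out
}

func main() {
	// canned records mirroring the dig output above
	records := map[string][]string{
		"_dnsaddr.bootstrap.libp2p.io":     {"dnsaddr=/dnsaddr/am6.bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb"},
		"_dnsaddr.am6.bootstrap.libp2p.io": {"dnsaddr=/ip6/2604:1380:4602:5c00::3/udp/4001/quic-v1/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb"},
	}
	addrs := resolveDNSAddr("bootstrap.libp2p.io", func(name string) []string { return records[name] })
	fmt.Println(addrs[0])
}
```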
Finally, connecting to the bootstrap node shows us the protocols it supports.
The example below uses [`libp2p-lookup`](https://github.com/mxinden/libp2p-lookup/), but `ipfs swarm connect` followed by `ipfs id` can be used instead.
```
libp2p-lookup direct --address /ip6/2604:1380:4602:5c00::3/udp/4001/quic-v1/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb
Lookup for peer with id PeerId("QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb") succeeded.
Protocol version: "ipfs/0.1.0"
Agent version: "kubo/0.20.0/b8c4725"
Listen addresses:
- "/ip6/2604:1380:4602:5c00::3/udp/4001/quic-v1"
- [...]
Protocols:
- /ipfs/kad/1.0.0
- [...]
```
Note the `Agent version: "kubo/0.20.0/b8c4725"` and the supported protocols `Protocols: - /ipfs/kad/1.0.0`.
# Motivation
_Why run both Kubo and rust-libp2p-server bootstrap nodes?_
This choice is influenced by three main areas: the benefit of diverse implementations, the opportunity to test rust-libp2p at large scale, and the presence of Rust in the IPFS network.
Implementation Diversity: Operating both Kubo and rust-libp2p-server bootstrap nodes contributes to the network's overall resilience and security. It's like having a second line of defense; if one system encounters an issue, the other is there to continue functioning. For instance, a recent bug impacted the Kubo IPFS bootstrap nodes, closing incoming connections right after their successful establishment due to a QUIC version mismatch. By using both Kubo and rust-libp2p-server, we ensure that nodes can still join the network even if one set of bootstrap nodes is unavailable.
Testing rust-libp2p at Large Scale: Our use of rust-libp2p-server also provides a valuable opportunity to examine how it behaves at a larger scale. Software performance can vary depending on scale, and these differences are hard to predict without actual real-world deployments. Now we can gain insights similar to those we acquired from other large deployments such as [Polkadot](https://github.com/paritytech/polkadot/) and [Ethereum](https://blog.libp2p.io/libp2p-and-ethereum/).
Encouraging Rust in the IPFS Network: Lastly, by operating a rust-libp2p bootstrap node, we hope to motivate other developers to build IPFS-based applications using rust-libp2p. This could lead to an increase in the use of Rust, fostering a more diverse and vibrant ecosystem.
# rust-libp2p(-server) in Action
_What is rust-libp2p(-server) and how does it operate as an IPFS bootstrap node?_
[rust-libp2p](https://github.com/libp2p/rust-libp2p) is an implementation of the libp2p specification in Rust, a popular systems programming language. The rust-libp2p project was [initiated around 2018](https://www.parity.io/blog/why-libp2p) and since then, it has powered networks like Ethereum, through its Rust implementation [Lighthouse](https://github.com/sigp/lighthouse), and [Polkadot](https://github.com/paritytech/polkadot/) along with the [Substrate](https://github.com/paritytech/substrate/) ecosystem. You can find more rust-libp2p users [here](https://github.com/libp2p/rust-libp2p#notable-users).
[rust-libp2p-server](https://github.com/mxinden/rust-libp2p-server/) is just a thin wrapper around rust-libp2p. It combines rust-libp2p's TCP, QUIC and Kademlia-DHT implementations into a single binary. Looking up the new rust-libp2p-server IPFS bootstrap node `ny5` via [`libp2p-lookup`](https://github.com/mxinden/libp2p-lookup/) confirms just that. Note the `Agent version: "rust-libp2p-server/0.12.0"` and `Protocols: - /ipfs/kad/1.0.0`.
```
libp2p-lookup direct --address /dnsaddr/ny5.bootstrap.libp2p.io
Lookup for peer with id PeerId("QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa") succeeded.
Protocol version: "ipfs/0.1.0"
Agent version: "rust-libp2p-server/0.12.0"
Listen addresses:
- [...]
Protocols:
- /ipfs/kad/1.0.0
- [...]
```
## Some Numbers
On the new bootstrap node we see around 15 new inbound connections per second. The majority of these connections are established via QUIC (see `ip4/udp/quic`).
![rust-libp2p bootstrap node establishing its first connections](../assets/2023-07-rust-libp2p-based-ipfs-bootstrap-node-new-incoming-connections.png)
The node is handling > 30k connections concurrently, thus being connected to roughly [15% of nodes of the public IPFS DHT](https://probelab.io/ipfsdht/#client-vs-server-node-estimate).
![rust-libp2p bootstrap node concurrent connections established](../assets/2023-07-rust-libp2p-based-ipfs-bootstrap-node-connections-established.png)
Across these connections the node handles around 40 Kademlia requests per second, most of which are Kademlia `FIND_NODE` requests.
![rust-libp2p bootstrap node new incoming Kademlia requests](../assets/2023-07-rust-libp2p-based-ipfs-bootstrap-node-new-incoming-kademlia-requests.png)
The number of connections does not have a significant impact on CPU usage of the underlying machine.
![rust-libp2p bootstrap node CPU usage](../assets/2023-07-rust-libp2p-based-ipfs-bootstrap-node-cpu.png)
The node uses `< 300 kbyte` of memory per connection.
![rust-libp2p bootstrap node memory usage](../assets/2023-07-rust-libp2p-based-ipfs-bootstrap-node-memory.png)
A small tangent: in case you are interested in more IPFS public DHT metrics, take a look at the [probelab DHT metrics and reports](https://probelab.io/ipfsdht/).
# Closing
If you want to learn more:
- Read up on the [libp2p project](https://libp2p.io/)
- Explore the [rust-libp2p implementation](https://github.com/libp2p/rust-libp2p)
- See the thin rust-libp2p wrapper at [mxinden/rust-libp2p-server](https://github.com/mxinden/rust-libp2p-server/)
- And lastly, the [public IPFS DHT measurements](https://probelab.io/ipfsdht/) are always a good read
A lot of this work was done by [@mcamou](https://github.com/mcamou) from the [Protocol Labs EngRes Bifrost team](https://pl-strflt.notion.site/Bifrost-2423fee6b15243158e85e35d8e22241d?pvs=4). Mario has handled the deployment and the team is operating the bootstrap nodes as a whole. Thanks, [@mcamou](https://github.com/mcamou) and team!
## FAQ
- **Do I have to use the default bootstrap nodes?** No, you don't have to use `/dnsaddr/bootstrap.libp2p.io`. You can remove Protocol Labs' default nodes and add your own, or use both for better reliability.
- **Do we plan to run rust-libp2p-server on all IPFS bootstrap nodes?** No.

---
title: An Observatory for the IPFS Network
description: 'The ProbeLab team has built a resilient and fully-automated infrastructure to monitor the performance of core IPFS protocols!'
author: Yiannis Psaras (@yiannisbot)
date: 2023-08-03
permalink: '/2023-ipfs-observatory/'
header_image: '/blog-post-probelabio.png'
tags:
- measurements
- DHT
- IPNI
---
# tl;dr
The ProbeLab team has worked hard over the past year to build a resilient and fully-automated infrastructure to monitor the performance of core IPFS stack protocols. All of what we have so far lives at [https://probelab.io](https://probelab.io) with results being auto-updated on a daily basis.
## Why measurement work is important
Measuring operational networked systems is the cornerstone of system reliability, stability, and great user experience. Unless someone measures the performance of their system, it is very difficult to spot problems and inconsistencies between protocol design and actual operation. Most importantly, it is very difficult to direct engineering effort in the right direction in order to solve actual problems, deal with bottlenecks, and eventually improve performance.
System and network measurements are normally straightforward when there is a single (or a few) points of control. The task becomes significantly more challenging in the case of open-source, decentralized, and permissionless systems such as IPFS where there is no single point of control (or any gatekeeping entity).
## ProbeLab
This is where our team's efforts come into the picture. ProbeLab focuses on protocol measurement, benchmarking, and optimization for Web3.0 protocols in general and IPFS in particular. Over the past year we strived to build all the necessary tooling and backend infrastructure to be able to reliably measure the most critical aspects of the decentralized network. To avoid this remaining our own knowledge, and instead share our findings with the community, we've also built a public-facing front-end where our key results are reported on a daily basis: [https://probelab.io](https://probelab.io)
One key thing that makes [https://probelab.io](https://probelab.io) different from a plain dashboard is the detailed explanation of the measurement methodology and setup, so that viewers can understand whether the results they're observing fit their own setup and use-case. Ultimately, [https://probelab.io](https://probelab.io) should become the point of reference for engineers, as well as executives, that are running (or are considering running) their applications on top of the IPFS network.
## Our focus (so far)
Apart from [several](https://www.notion.so/pl-strflt/Optimistic-Provide-07ce632c6de54aec953ec0e9ca2bbcf5?pvs=4) [protocol](https://github.com/plprobelab/network-measurements/blob/master/results/rfm16-bitswap-discovery-effectiveness.md) [optimization](https://github.com/plprobelab/network-measurements/blob/master/results/rfm15-nat-hole-punching.md) projects that ProbeLab has taken up so far, our primary focus on the measurements front has been the main component that supports decentralized content routing in the IPFS protocol stack: the IPFS Public DHT network. This focus was not a random pick but a deliberate choice, given that this is where performance had been mostly unknown and unpredictable - until now!
That said, we have extended our efforts to other critical parts of the architecture, such as the [InterPlanetary Network Indexers](https://docs.ipfs.tech/concepts/ipni/), and we plan to add more components to our monitoring infra in the near future.
Sample projects where our measurement infrastructure has helped the ecosystem tremendously are:
- **The Hydra Dial Down:** [Hydra Boosters](https://github.com/libp2p/hydra-booster) are a special type of DHT server node designed to accelerate content routing performance in the IPFS network. They were introduced in 2019 and were intended as an interim solution while exploring other DHT scalability techniques. The IPFS DHT network and its supporting protocols have advanced significantly since then, and the (not insignificant) cost of operating Hydras was called into question by our team. We found that Hydras improve content routing performance by no more than 10-15% on average, which was considered minor compared to their operational cost. The team carried out a progressive dial-down of Hydras after communicating our intentions to the community (see [details](https://discuss.ipfs.tech/t/dht-hydra-peers-dialling-down-non-bridging-functionality-on-2022-12-01/15567)) and confirmed our performance estimates of a Hydra-less network. You can find an explanatory talk on our measurement estimates from IPFS Camp 2022 [here](https://www.youtube.com/watch?v=zhzxJGoLTg0) and the full project report [here](https://github.com/protocol/network-measurements/blob/master/results/rfm21-hydras-performance-contribution.md).
- **Unresponsive Nodes Incident:** ProbeLab's measurement work and tooling proved critical during an incident that nearly brought the IPFS network to its knees. Around January 2023, a software misconfiguration resulted in more than 50% of IPFS DHT network nodes becoming unresponsive. Through rigorous measurement and analysis of the results, the engineering teams chose the right next steps to resolve the situation in record time, something that would have been significantly more difficult without the numbers that the ProbeLab team provided. You can read all of the details regarding the incident, the response, and the measurements that our team carried out in [this previous blog post](https://blog.ipfs.tech/2023-ipfs-unresponsive-nodes/).
## ProbeLab Tooling
Our primary tooling is open-source and linked from the same website under: [https://probelab.io/tools/](https://probelab.io/tools/). There are detailed “how to” guides for each tool so that community members can get familiar and start using them for their own studies. The tools we have used so far include:
- [`Nebula`](https://probelab.io/tools/nebula/): a libp2p DHT crawler and monitor that is designed to track the liveliness and availability of peers.
- [`Parsec`](https://github.com/plprobelab/parsec): a DHT and IPNI performance measurement tool that is used to gather accurate data on the performance of DHT and IPNI lookups and publications.
- [`Tiros`](https://github.com/plprobelab/tiros): a retrieval and rendering metrics measurement tool of websites loaded over IPFS. It is designed to help developers monitor and optimize the performance of their IPFS-hosted websites. It also measures and compares the IPFS metrics with their HTTPS counterparts.
## What we know now that we didn't know before
The plots and experiments at [https://probelab.io](https://probelab.io) offer visibility into many aspects that were not visible at all beforehand, or at least were not widely available. Our monitoring and observation of the performance of IPFS's primary content routing components over the last couple of quarters reveals that, at the time of writing:
- More than 25k DHT Server peers stay online for more than 80% of the time in a given week [[link to plot](https://probelab.io/ipfskpi/#dht-availability-classified-overall-plot)]
![dht-availability-classified-overall.png](../assets/2023-08-ipfs-observatory-dht-availability-classified-overall.png)
- Despite the above, the churn rate in the network is rather high, with 80% of DHT Server peers leaving the network within 3 hours or less of appearing online [[link to plot](https://probelab.io/ipfsdht/#dht-peers-churn-cdf-overall-plot)]
![dht-peers-churn-cdf-overall (1).png](../assets/2023-08-ipfs-observatory-dht-peers-churn-cdf-overall.png)
- The Median DHT Lookup Performance (i.e., the time to first provider record) is at 600ms as measured from 7 different geographical regions. It is worth highlighting that the lookup performance from the EU and North America, where most DHT nodes reside, is significantly better than other regions and stands at 200-250ms [[link to plot](https://probelab.io/ipfsdht/#dht-lookup-performance-cdf-region-plot)].
![dht-lookup-performance-cdf-region.png](../assets/2023-08-ipfs-observatory-dht-lookup-performance-cdf-region.png)
- Websites hosted on IPFS are served faster via Kubo than over HTTP in those well-performing regions (EU and North America) [[link to plot](https://probelab.io/websites/#websites-http-comparison-ttfb-p90)].
![websites-http-comparison-ttfb-p90.png](../assets/2023-08-ipfs-observatory-websites-http-comparison-ttfb-p90.png)
- The [cid.contact](http://cid.contact) IPNI maintains a stable lookup performance below the 300ms mark at the P90 for uncached content and across all 7 regions [[link to plot](https://probelab.io/ipni/cid.contact/#ipni-snapshot-uncached-latencies-cdf-cidcontact-plot)].
![ipni-snapshot-uncached-latencies-cdf-cidcontact.png](../assets/2023-08-ipfs-observatory-ipni-snapshot-uncached-latencies-cdf-cidcontact.png)
## Where to find more
Head over to [https://probelab.io](https://probelab.io) to dive into all the results and explanation of the experiments.
It is worth noting that we do not provide commentary on the results presented on the website itself. Instead, discussion around results reported at [https://probelab.io](https://probelab.io) is taking place at the [IPFS Discussion Forum](https://discuss.ipfs.tech/c/testing-and-experiments/measurements/39).
You can reach out to the ProbeLab team (e.g., if you're interested in contributing to the measurement effort, or have a request) through:
- The `#probe-lab` channel in IPFS Discord [[invite link](https://discord.gg/ipfs)] or Filecoin Slack [[invite link](https://filecoin.io/slack)] (bridged channel)
- The team's email: [probelab@protocol.ai](mailto:probelab@protocol.ai)
We also hold bi-weekly Office Hours, where we invite the community and our collaborators to join and bring up questions, challenges they face and topics for discussion. You can sign up through [this lu.ma page](https://lu.ma/ipfs-network-measurements).
<!-- ## A guide for website owners hosting with IPFS
Last, but not least, we have developed an in-depth methodology to monitor performance of websites hosted on IPFS. We are currently monitoring most of PLs websites and provide a breakdown of web access performance metrics (primarily using [Web Vitals](https://web.dev/vitals/)). This is very helpful for monitoring overall performance, but especially for debugging in case of poor performance, or errors while fetching website content.
++ linking to the howto guide as well as how to use it, if we finalise and decide to include. -->

---
title: Amino (the Public IPFS DHT) is getting a facelift
description: 'The ProbeLab team is working on a major refactoring of the Public IPFS DHT (henceforth called Amino) and a new feature which will accelerate the provide operation by several orders of magnitude. Read through to find out the details and how to get involved.'
author: ProbeLab
date: 2023-09-26
permalink: '/2023-09-amino-refactoring/'
header_image: '/2023-09-amino-refactoring.png'
tags:
- 'Amino'
- 'IPFS DHT'
- 'Reprovide Sweep'
---
Two major items are being announced in this blogpost, both of which are equally exciting and relate to “the Public IPFS DHT” (the [public Kademlia-based DHT](https://docs.ipfs.tech/concepts/dht/#dual-dht) that [Kubo (and other implementations) default to bootstrapping into](https://docs.ipfs.tech/how-to/modify-bootstrap-list/) with the libp2p protocol `/ipfs/kad/1.0.0`), which is henceforth going to be called **“Amino”**. The first relates to a major refactoring of the Amino codebase and the second is an optimization of the publish operation of the protocol, so that providing content to Amino is made much faster and resource-efficient.
## Why Amino?
The “Public IPFS DHT” is henceforth going to be called **“Amino”**. This follows the trend from 2022 in the IPFS ecosystem to use more precise language to create space for alternative options (i.e., other DHTs). Just as there isn't one IPFS implementation, there isn't one content routing system or DHT. “Amino” comes from amino acids: the building blocks of larger, stronger structures, which is what we believe this network will become. There can be several IPFS DHT networks, and they can choose to borrow functionality from the “Amino” network. More context on the naming can be found [here](https://github.com/ipfs/ipfs/discussions/473).
## Refactoring of `go-libp2p-kad-dht` codebase
It has long been recognized that the current Go implementation of libp2p's Distributed Hash Table (DHT), which is used by IPFS implementations like Kubo and other projects/platforms, is in need of a major revision. The problems identified by core maintainers and community contributors alike can be summarised as follows:
1. Several years of adding extra features to the codebase and iterating on core functionality have made the DHT faster and more efficient, but have also added substantially to its complexity. It has become more **difficult to understand and make changes to the code**, which is indirectly pushing developers away from contributing to it.
2. **Flaky tests due to concurrency issues**. Unit tests, which evaluate whether the implementation is working as expected, are difficult to implement due to the extensive parallelization of several parts of the code.
3. The lack of unit tests in turn makes it **hard to carry out performance evaluation tests**. This has recently resulted in performance evaluation results that are hard to understand or act upon - Bitswap's `Provider Search` delay is a good example here [[link](https://github.com/ipfs/kubo/pull/9530)].
4. The current implementation is carrying a **non-negligible amount of technical debt** acquired over the years. For instance, Kademlia should only handle Kademlia identifiers (256-bit bitstrings) internally, but it is currently using strings [[source](https://github.com/libp2p/go-libp2p-kad-dht/blob/b63ad6096833d36b365f1361edab871f6cdc283c/query.go#L83)].
The [PL EngRes IPFS Stewards team](https://www.notion.so/IPFS-f3c309cecfd844e788d8b9e13472a97b?pvs=21) has been working on a **major refactoring of `go-libp2p-kad-dht`**. In this context, a new library, `go-libdht`, defines the basic building blocks for implementing DHTs and will be used by the refactored `go-libp2p-kad-dht`. The goal of the refactoring project is to address the above challenges. In particular, the aims are to:
- make the code base easy to modify and improve by making it single-threaded.
- allow for sequential, deterministic code execution, making debugging easier, testing more reliable, and simulation/reproducibility possible, and
- get rid of unnecessary code and complexity.
### Expected Changes & Timeline
The refactored codebase is being worked on in the [v2-develop branch of go-libp2p-kad-dht](https://github.com/libp2p/go-libp2p-kad-dht/tree/v2-develop). The current progress, next tasks and open issues can be found at this project board: [https://github.com/orgs/plprobelab/projects/1](https://github.com/orgs/plprobelab/projects/1). The refactored code is expected to be completed, tested and ready for integration into Kubo for further testing during the first half of October.
Where possible, we aim to remain compatible with version 1. There are no breaking protocol changes planned, and we expect to be able to adhere to the standard Routing interface used by Kubo. The libp2p Kademlia implementation has been battle tested through many years of use, and we want to take advantage of the learnings from that real-world usage while improving the ergonomics and clarity of the code. However, we will be taking this opportunity to look closely at the current code interfaces and to propose improved or new ones.
Most of the changes being made are internal to the operation of the DHT. We're creating a new state-machine-oriented execution model that is very different from the existing implementation. This allows us to bound work and resources more cleanly and to prioritize the work performed more appropriately. Performance will also be different; for the initial release, our goal is for it to be similar to the current codebase. However, we expect the new execution model will give us more scope for optimization in the future. Having better control over the scheduling of work will also allow the new implementation to continue to perform well under resource pressure and high load.
## Making Reprovides to Amino lightning fast
Content providers with a large number of CIDs to provide to the DHT have traditionally been facing difficulties. The current PUT operation in `go-libp2p-kad-dht` lacks resource efficiency. For every CID being reprovided, the provider performs a lookup and initiates a connection with the top 20 nearest peers *sequentially*. In practice, this means that if a peer needs to be contacted twice for two CIDs, the providing peer needs to open two connections to the same peer at different points in time within the same reprovide task.
In turn, this results in significant bandwidth requirements and deters large content providers from advertising their content on Amino (the IPFS DHT) due to cost constraints. The sequential manner in which reprovides take place can result in content providers failing to refresh all content within the 48h provider record expiration interval [[link to source](https://github.com/libp2p/go-libp2p-kad-dht/blob/b63ad6096833d36b365f1361edab871f6cdc283c/providers/providers_manager.go#L38)][[link to spec](https://github.com/libp2p/specs/tree/master/kad-dht#content-provider-advertisement-and-discovery)], rendering the content inaccessible.
Our approach is to optimize the provide process, making it much less resource intensive. This will pave the way for a significantly larger throughput in the number of "provides".
### High level design of `ReprovideSweep`
The base premise of `ReprovideSweep` is that **all keys located in the *same keyspace region* are reprovided all at once**, instead of sequentially. This is in contrast to the status quo in the current IPFS DHT, where the provider record of each CID is sent out separately, through a new connection.
Given that some large Content Providers publish far more CIDs than there are DHT Servers, by the [pigeonhole principle](https://en.wikipedia.org/wiki/Pigeonhole_principle) there must be DHT Servers that are allocated more than one Provider Record by a particular Content Provider. The primary rationale is to send/re-provide all Provider Records allocated to the same DHT Server *at once, instead of having to revisit the same server later on, re-establish a connection, and store the provider record*.
However, because sending multiple Provider Records in a single request would require a new RPC, causing a breaking change, it isn't trivial to send all Provider Records exactly *at once*. That said, the most expensive parts of a (Re)Provide operation are the DHT walk to discover the right DHT Servers to store the Provider Records on, and opening new connections to these peers. Once these peers are known and a connection is already open, the Content Provider can simply reuse the same connection to send multiple individual `Provide` requests, thereby avoiding breaking changes while still reaping performance gains.
The `go-libp2p-kad-dht` DHT implementation must keep track of the CIDs that must be republished every `Interval` (let's assume that all Provider Records are republished at the same frequency). The Kademlia identifiers of the CIDs to republish must be arranged in a [binary trie](https://github.com/guillaumemichel/py-binary-trie) to allow for faster access. As each Provider Record is replicated on 20 different DHT Servers, 20 DHT Servers in a close locality are expected to store the same Provider Records (this is not 100% accurate, but suffices for our high-level description here - we'll publish all the details in a subsequent post, when the solution is in production).
In a nutshell, the Content Provider will continuously lookup keys across the entire keyspace, hence “sweeping” the keyspace. For each key that is to be published, the Content Provider will find the 20 closest peers, and lookup in its “CIDs Republish Binary Trie” all Provider Records that would belong to those specific 20 remote peers. Doing this match-making exercise, content providers will be able to reprovide all provider records that correspond to a particular peer at once. Based on this logic, Content Providers are only limited by network throughput.
You can watch a recording from [IPFS Thing 2023](https://2023.ipfs-thing.io/) explaining the concept in more detail [here](https://youtu.be/bXaL64fp55c?si=1LuukjErCG_bz02N).
### `ReprovideSweep` Performance
`ReprovideSweep` is not implemented yet; hence, we can only approximate its performance analytically. The tables below show that `ReprovideSweep` improves performance significantly on all fronts and important metrics, assuming that the number of CIDs (`#CIDs`) that a provider wishes to publish is much larger than the number of DHT Server nodes in the network (`#DHT_SERVERs`), i.e. `#CIDs >> #DHT_SERVERs`:
- The number of DHT Lookups is reduced from being equal to the number of CIDs to be published, down to 1/20th of the number of DHT Server nodes in the network.
- The number of connections that need to be opened is also reduced and is equal to the number of DHT Server nodes (if the number of CIDs to be provided is much larger than the number of server nodes in the network).
- As the second table shows, assuming a network size of ~25k DHT Server nodes, the overall improvement in the number of connections opened and the number of DHT Lookups is significant, reaching ~800x for 1M CIDs.
| | Current Reprovide | Reprovide Sweep |
| --- | --- | --- |
| Number of DHT lookups | #CIDs | ~1/20 * #DHT_SERVERs |
| Number of connections to open | 20 * #CIDs | #DHT_SERVERs |

| #CIDs published | Improvement (#connections, #DHT Lookups) |
| --- | --- |
| > 1K | - |
| 25K | 20x |
| 100K | 80x |
| 500K | 400x |
| 1M | 800x |
| 10M | 8000x |
### Expected Changes & Timeline
We are very excited about this change because it will enable large content providers to start using the most resilient and decentralized component of the IPFS network.
**This change is a client-side optimization and doesn't involve any protocol alteration.** As such, it allows users to benefit from the feature immediately. The interface between `go-libp2p-kad-dht` and [`boxo`](https://github.com/ipfs/boxo), which Kubo uses, must be updated to enable the DHT client to take on the responsibility of managing the reprovide operation.
The PL EngRes IPFS Stewards team is currently working to define the spec for `ReprovideSweep`, which we hope to have ready at the beginning of October, and we anticipate rolling out this enhancement during Q4 2023. We will update the community with a new blogpost or discussion forum post closer to the time. Until then, you can follow developments on this front through this GitHub issue: [https://github.com/libp2p/go-libp2p-kad-dht/issues/824](https://github.com/libp2p/go-libp2p-kad-dht/issues/824).
## What's next
We believe the above lays the groundwork for more exciting DHT innovation ahead. We have some ideas that we'd love to discuss and work on with the community. We're still figuring out the best place for this conversation, but subscribe [here](https://discuss.ipfs.tech/t/dht-discussion-and-contribution-opportunities-in-2023q4/16937) if you're interested in learning about upcoming DHT discussion areas (e.g., at [LabWeek](https://labweek.plnetwork.io/)/[DevConnect](https://devconnect.org/), DHT working group). You can also join the team's Office Hours by subscribing at: [https://lu.ma/ipfs-network-measurements](https://lu.ma/ipfs-network-measurements).
## How to get involved
As always, help is more than welcome to accelerate development and make the design more robust through feedback. Here are ways you can get involved:
- Github repository:
- DHT Refactoring: [https://github.com/plprobelab/go-kademlia/](https://github.com/plprobelab/go-kademlia/)
- Reprovide Sweep: [https://github.com/libp2p/go-libp2p-kad-dht/issues/824](https://github.com/libp2p/go-libp2p-kad-dht/issues/824)
- Slack channel:
- `#probe-lab` in [FIL Slack](https://filecoin.io/slack) or [IPFS Discord](https://discord.gg/vj7qWuAyHY) (bridged channel), or
- `#kubo-boxo-dev` in FIL Slack
- IPFS Discussion forum:
- DHT Refactoring and future planning: [https://discuss.ipfs.tech/t/dht-discussion-and-contribution-opportunities-in-2023q4/16937](https://discuss.ipfs.tech/t/dht-discussion-and-contribution-opportunities-in-2023q4/16937)

---
title: Announcing the Content Tracks for IPFS Thing 2023
description: 'An overview of the content tracks that the community will convene around during IPFS Thing 2023.'
author:
date: 2023-03-29
permalink: '/2023-ipfs-thing-content-tracks/'
header_image: '/2023-3-29-ipfs-thing-content-tracks.jpg'
tags:
- 'ipfs thing'
- 'event'
---
The IPFS implementers community will be gathering together in Brussels, Belgium in just a few weeks for [IPFS Thing 2023](https://2023.ipfs-thing.io/submit/).
Today we're excited to share the final list of content tracks so you know what to expect! Each track will have a variety of talks and discussions from members of the community.
If you haven't registered yet, head on over to [the event website](https://2023.ipfs-thing.io/) to grab your tickets today! [IPFS Thing 2023](https://2023.ipfs-thing.io/) is happening from **April 15-19 in Brussels, Belgium** and will include talks, workshops, discussion circles, hacking time, and more.
## Opening Keynotes
During this opening session, we'll hear an overview of the latest implementations, tools, and advancements across the world of IPFS, and celebrate the winners of the IPFS Impact Grants Round 2. (Track lead: [Mosh Lee](https://twitter.com/mishmosh))
## Standards, Governance, and DWeb Policy
This track sits at the intersection of IPFS standards, governance, and dweb regulation. What's the latest on the IPFS protocol and governance? What specific problems do we face regarding existing regulation? What new regulation or changes could be helpful? Are there interesting policy angles that we can surface, develop, and advocate for? How do we make the dweb a robust, sustainable commons? (Track lead: [Robin Berjon](https://mastodon.social/@robin))
## IPFS Deployments + Operators
From best practices to the mistakes made along the way, this track is a chance to highlight how members of the community are running IPFS nodes at scale. Let's share what's working well and what implementations can do to make things even better! (Track lead: [James Walker](https://twitter.com/walkah))
## Interplanetary Databases
There's a new class of distributed database technologies building atop steady advances in IPLD and hash-linked data structures in general. In this track we'll gather those brave enough to take on the CAP theorem in a decentralized context, share notes on what's working, and hear presentations from teams pushing the envelope on what databases can do and where they can exist. (Track lead: [J Chris](https://twitter.com/jchris))
## Data Transfer
Come join the Protocol Thunderdome as we battle to determine the best way to move content addressed bytes! We'll review recent progress in data transfer, including work coming out of the Move The Bytes Working Group, and explore how we can make IPFS 10x faster at getting your stuff than Web2! (Track lead: Hannah Howard)
## Measuring IPFS
A data-driven approach to the design and operation of IPFS and libp2p through rigorous network measurements, performance evaluation, and recommendations for builders and operators. (Track lead: Yiannis Psaras)
## IPFS on the Web
The world wide web is both the biggest deployment vector and least tractable surface for IPFS. There are opportunities and major challenges in bringing IPFS support to web rendering engines and browsers, to web content served through gateways, and to IPFS network access from HTTP web apps and browser extensions. This track will have talks on current and future browser implementations, approaches to managing and publishing IPFS content on the web, and building apps that connect to IPFS from within HTTP contexts, culminating in planning for group working sessions around specific IPFS+Web challenges on days 4 & 5 of IPFS Thing. (Track lead: [Dietrich Ayala](https://twitter.com/dietrich/))
## Integrating IPFS
IPFS is not an island - it exists in diverse environments, manifesting in different ways depending on the use-case, ranging from mobile devices to blockchains to naming systems, even soon in space. These integration points provide interesting opportunities to explore the capabilities of IPFS and muse on what IPFS even is. We'll hear from folks on what they're doing, what's working, and ponder how far we can flex IPFS to fit the multitude of places it needs to be. (Track lead: Ryan Plauche)
## Decentralized Compute & AI
We believe computing and AI can become more powerful and useful by embracing content addressing and a “merkle-native” way of doing things. In this track, we'll discuss various projects in this area, sharing R&D experiences, future directions, use cases, and benefits. (Track lead: [Iryna Tsimashenka](https://twitter.com/iryna_it09))
## Content Routing
Approaches and protocols to content routing in IPFS, what we've learned so far, and directions for the future. Join this track to explore herding CIDs, bringing content providers closer to the seekers of content, new advances across content routing systems, and a fresh look at the horizon of what's to come. (Track lead: Masih Derkani)
## HTTP Gateways
How do we deliver IPFS content to the masses? In this track, we'll dive into the magical and maddening topic of HTTP Gateways. Topics include the evolving semantics of /ipfs/cid, .car blocks and rendered flat files, and large-scale efforts to improve gateway architectures such as Project Saturn and Project Rhea. (Track lead: [Will Scott](https://twitter.com/willscott/))
## Roadmapping Next Steps out of the IPFS þing
A discussion / breakout-oriented workshop for defining and committing to next steps out of the week's conversations, which we can land and celebrate at upcoming IPFS events in Q3 / Q4 2023. (Track lead: [Molly Mackinlay](https://twitter.com/momack28))
---
We're looking forward to seeing you all soon and exploring these exciting content tracks together in-person!
Have a talk or workshop to share? You can also [submit a talk](https://2023.ipfs-thing.io/submit/) through April 5.

View File

@@ -0,0 +1,26 @@
---
title: Brave Browser's New IPFS Infobar
description: "We're excited to share a new IPFS-related feature that appears in the most recent version of Brave."
date: 2023-09-25
header_image: '/braveinfobar2.png'
tags:
- brave
- browsers
---
We're excited to share a new IPFS-related feature that appears in the most recent version of [Brave's web browser](https://brave.com/). A new IPFS Infobar will appear at the top of the browser when you visit an IPFS-compatible resource such as a [CID on a public gateway](https://docs.ipfs.tech/how-to/address-ipfs-on-web/#http-gateways) or a website with a [DNSLink](https://docs.ipfs.tech/concepts/dnslink/).
Using the IPFS Infobar, you can choose whether you would like to switch to loading the IPFS version of the content, either always or only in the specific instance.
![](../assets/brave_infobar_2.jpg)
![](../assets/brave_infobar_3.png)
This new feature will increase the visibility of IPFS content when it exists and contribute to greater awareness of the benefits of content addressing.
Brave's IPFS Infobar is a small but mighty new feature that we are excited to see in the wild!
In addition to the Infobar, there are more tools currently being developed for [Brave](https://brave.com/) by others such as [David Justice](https://github.com/JusticeEngineering) that are worth noting. Some of the prototypes include a Markdown/WYSIWYG webpage creator, a Link-in-Bio/link-list site creator, and the ability to password-protect webpages, with many more ideas in the works.
[https://github.com/JusticeEngineering/markdown-publish](https://github.com/JusticeEngineering/markdown-publish)
[https://github.com/JusticeEngineering/link-list](https://github.com/JusticeEngineering/link-list)

View File

@@ -0,0 +1,30 @@
---
title: Content Blocking for the IPFS stack is finally here!
description: "We're excited to share that content blocking can now be enabled in Kubo and other tools in the IPFS stack."
author: The Bifrost Team
date: 2023-04-26
permalink: '/2023-content-blocking-for-the-ipfs-stack/'
header_image: '/release-notes-placeholder.png'
tags:
- 'go-ipfs'
- 'kubo'
- 'badbits'
- 'content-blocking'
- 'content-moderation'
---
Bifrost (the Protocol Labs NetOps team responsible for the IPFS.io HTTP gateways) is happy to announce that content blocking can now be enabled in Kubo and other tools in the IPFS stack.
Traditionally, content blocking has been performed only at the IPFS gateway level and directly in Nginx, using the original [Badbits denylist](https://badbits.dwebops.pub/denylist.json). This had a few issues: content on the denylist was not blocked on Kubo and was still available via Bitswap. Additionally, blocking affected concrete CID strings, but not equivalent ones (i.e. those with a different base encoding).
In order to resolve these issues and to make a long term commitment to improving how we do content moderation in IPFS, we have taken the following steps:
- [Submitted IPIP-383](https://github.com/ipfs/specs/pull/383), which defines a much more flexible and efficient compact denylist format. This new format supports different block types and sets a foundation for future work on denylist transparency, sharing, and distribution. For example, every blocked item can now have tags attached to provide metadata such as the reason for the blocking. IPFS implementations can then choose whether to expose that information or not.
- [Implemented NOpfs](https://github.com/ipfs-shipyard/nopfs), a Blocker that understands the new compact denylist format and decides whether any CID or IPFS path should be blocked. This Blocker implementation also provides a Kubo plugin which gives Kubo the ability to never download blocked content. NOpfs can also be used separately from Kubo by setting up a web service that returns whether an IPFS path or URL should be blocked (upcoming work from our side). This can also be useful for Filecoin storage providers and anyone who wants to make sure their CIDs have not been included in a denylist.
In the meantime, we have converted our existing denylist to the new format so that everyone can take advantage of these changes right away: [https://badbits.dwebops.pub/badbits.deny](https://badbits.dwebops.pub/badbits.deny)
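To make the shape of the new format concrete, here is a minimal, hypothetical parser sketch. The line grammar below (`#` comments, `!` exceptions, `//` hashed entries) is simplified from the IPIP-383 draft; treat it as illustrative rather than normative and refer to the IPIP for the real rules.

```typescript
// Minimal sketch of a parser for compact-denylist-style lines.
// The grammar is a simplification of IPIP-383, not a normative implementation.

type Rule = {
  negated: boolean // `!` prefix lifts a block (an exception)
  hashed: boolean  // `//` entries hide the blocked item behind a hash
  pattern: string  // path (or hash) to match against
}

function parseDenylist (text: string): Rule[] {
  const rules: Rule[] = []
  for (let line of text.split('\n')) {
    line = line.trim()
    if (line === '' || line.startsWith('#')) continue // blank line or comment
    const negated = line.startsWith('!')
    if (negated) line = line.slice(1)
    const hashed = line.startsWith('//')
    rules.push({ negated, hashed, pattern: hashed ? line.slice(2) : line })
  }
  return rules
}

const rules = parseDenylist(`
# example denylist body (hypothetical entries)
/ipfs/bafybeexamplecid
!/ipfs/bafybeexamplecid/allowed-subpath
//d9d295bde21f422d471a90f2a37ec53049fdf3e5fa3ee2e8f20e10003da429e7
`)
console.log(rules.length) // 3 rules: one block, one exception, one hashed entry
```

A real Blocker would additionally normalize CIDs (e.g. by comparing multihash digests rather than strings) so that equivalent CIDs in different base encodings match the same rule.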
This work is the framing for a larger endeavour to improve content moderation on the IPFS public networks. If you have any questions, need help, or would like to collaborate, then please [reach out via GitHub on IPIP-383](https://github.com/ipfs/specs/pull/383) or [NOpfs](https://github.com/ipfs-shipyard/nopfs)! If you'd like to help further this initiative, you can start by sharing this news with your community and by letting the Kubo maintainers know that you'd like to see this functionality integrated into Kubo as a first class citizen.
And last of all, it would be remiss of us if we didn't thank [Hector](https://twitter.com/hecturchi) for all the hard work he put into this. Thank you for all your efforts... they are greatly appreciated!
The Bifrost Team.

View File

@@ -0,0 +1,505 @@
---
title: 'How to Host Dynamic Content on IPFS'
description: 'This article presents a design for hosting dynamic content on IPFS using IPLD, IPNS, and DHT Provider Records.'
author: tabcat
date: 2023-05-17
permalink: '/2023-how-to-host-dynamic-content-on-ipfs/'
header_image: '/hosting-dynamic-content.png'
tags:
- 'dynamic-content'
- 'hosting'
- 'ipld'
- 'ipns'
- 'dht'
---
The InterPlanetary File System (IPFS) is a distributed, peer-to-peer file system designed to make the web faster, safer, and more resilient. Although IPFS excels at hosting static content, hosting dynamic content remains a challenge. This article presents a design for hosting dynamic content on IPFS using InterPlanetary Linked Data (IPLD), InterPlanetary Name Service (IPNS), and DHT Provider Records.
## Table of Contents
<!-- TOC start -->
- [Understanding Key Components](#understanding-key-components)
* [IPLD](#ipld)
* [IPNS](#ipns)
* [PeerID](#peerid)
* [Provider Records](#provider-records)
- [Defining the Problem](#defining-the-problem)
- [Achieving Dynamicity](#achieving-dynamicity)
* [Read and Write Steps](#read-and-write-steps)
+ [Writing](#writing)
+ [Reading](#reading)
* [Dynamic-Content IDs](#dynamic-content-ids)
* [Manifest Document](#manifest-document)
- [Use-case: Edge-computed Applications](#use-case-edge-computed-applications)
* [Edge Devices](#edge-devices)
* [Pinning Servers](#pinning-servers)
* [Replication](#replication)
- [Roadblocks and Workarounds](#roadblocks-and-workarounds)
* [No 3rd Party Publishing to DHT](#no-3rd-party-publishing-to-dht)
* [No Delegated Refreshing of IPNS OR Provider Records](#no-delegated-refreshing-of-ipns-or-provider-records)
- [Example](#example)
* [Usage](#usage)
+ [Clone the Repo](#clone-the-repo)
+ [Install Packages](#install-packages)
+ [Run Examples](#run-examples)
* [What's Happening?](#whats-happening)
* [Sample Outputs](#sample-outputs)
- [Credits](#credits)
- [Get Involved](#get-involved)
- [FAQ](#faq)
<!-- TOC end -->
<br/>
## Understanding Key Components
### IPLD
[IPLD](https://ipld.io/) is a data model for linking and addressing data across distributed systems. In IPFS, IPLD stores immutable data, providing [content-addressed storage](https://en.wikipedia.org/wiki/Content-addressable_storage). Data stored in IPLD has a unique [Content Identifier](https://docs.ipfs.tech/concepts/content-addressing/) (CID) derived from its content, ensuring data integrity.
### IPNS
[IPNS](https://docs.ipfs.tech/concepts/ipns/) is a decentralized naming system that allows you to create a mutable reference to an immutable CID. With IPNS, you can create a persistent address that always points to the latest version of your content, even as it changes over time.
### PeerID
A [Libp2p PeerID](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) is a unique identifier for each node in the network, derived from a [public key](https://en.wikipedia.org/wiki/Public-key_cryptography). PeerIDs help find, identify, and communicate with other nodes.
### Provider Records
[Provider Records](https://docs.ipfs.tech/concepts/dht/) are a fundamental part of IPFS's Distributed Hash Table (DHT). When requesting IPFS content, a node queries the DHT for Provider Records associated with the requested CID. These records contain the PeerID of peers with the content, enabling the user to establish a connection and retrieve the data.
---
> **It's important to note that IPNS names and PeerIDs use the same [key structures](https://specs.ipfs.tech/ipns/ipns-record/#ipns-keys).**
---
<br/>
## Defining the Problem
Databases on IPFS have been gaining more attention recently. In essence, these database protocols use IPLD to store replica data.
And they commonly use a real-time protocol like [Gossipsub](https://docs.libp2p.io/concepts/pubsub/overview/) with IPLD to sync database changes peer-to-peer.
Using this design to create [local-first](https://www.inkandswitch.com/local-first/) databases looks quite promising.
However, local-first databases are often highly [sharded](https://en.wikipedia.org/wiki/Partition_(database)) and run on end-user devices.
This presents the problem of peers being few and unreliable to sync with.
One solution is to add reliable database peers to the mix, either self-hosted or hosted by a service.
There are two disadvantages to this approach:
- Each project must build infra tooling
- Users need a live instance of each database protocol used
It would benefit all related protocols to have a general solution for asynchronous replication of dynamic content.<br/>
*Think pinning layer for dynamic content.*
This standardized layer would complement the app-specific protocols used for real-time replication.
<br/>
## Achieving Dynamicity
Let's look at a replication algorithm for one of the first databases on IPFS, [OrbitDB](https://github.com/orbitdb). The algorithm is roughly as follows:
1. Join a shared pubsub channel for the database.
2. On seeing a new pubsub peer in the shared channel, attempt to join a direct pubsub channel ([ipfs-pubsub-1on1](https://github.com/ipfs-shipyard/ipfs-pubsub-1on1)).
3. On committing an update to the local replica, advertise replica root CIDs on each direct pubsub channel.
4. On receiving a replica root CID advertisement on a direct pubsub channel, traverse the remote replica for changes to merge.
The design presented in this article works similarly but replaces pubsub with Provider Records and IPNS. Essentially, all parts of replication get encoded into ~persistent IPFS components.
- Provider Records to find collaborators
- IPNS to point to the latest version of a device's replica
---
> **Swapping pubsub for ~persistent components makes building on history without any collaborators online possible.**
---
The main contribution is the novel use of Provider Records.
Instead of tying a CID to PeerIDs of nodes hosting that content, the records tie a "Dynamic-Content ID" to IPNS names.
Each IPNS name resolves to the latest CID of a device's local replica.
*Collaborating on dynamic content is possible without knowing any previous collaborators or needing them to be online as long as their replica data is kept available via a pinner.*
If you are familiar with publishing Provider Records to the DHT, *you may have spotted a problem here*.
The source of the problem is a check DHT servers do when receiving an `ADD_PROVIDER` query, addressed in [Roadblocks and Workarounds](#roadblocks-and-workarounds).
<img src="https://raw.githubusercontent.com/tabcat/dynamic-content/master/.assets/dynamic-content-diagram.png" width="333">
---
> **The Merkle-DAGs built with IPLD provide a persistent and efficient layer for collaborators to sync.**
---
<br/>
### Read and Write Steps
The following describes the process of reading and writing dynamic content on IPFS:
#### Writing
1. Make changes to the local replica
2. Push replica data to the IPLD pinner
3. Republish IPNS to point to new CID root
4. Add IPNS key as a provider of the Dynamic Content's ID
#### Reading
1. Query the DHT for Providers of the Dynamic Content's ID
2. Resolve providers' IPNS keys to CIDs
3. Resolve CIDs to IPLD data
4. Merge changes with the local replica
---
> **Titling this article 'Replication on IPFS' might have been more accurate, but 'Hosting Dynamic Content on IPFS' sounded waaay better.**
---
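The write and read steps above can be sketched as a toy, in-memory simulation. The three Maps below stand in for the pinner's blockstore, the IPNS namespace, and the DHT's provider records; every name and shape here is a hypothetical simplification, not the real Helia/libp2p API.

```typescript
// Toy simulation of the read/write steps. Maps stand in for real systems:
const blockstore = new Map<string, string>()     // cid -> replica data (pinner)
const ipns = new Map<string, string>()           // ipns name -> latest root cid
const providers = new Map<string, Set<string>>() // dcid -> ipns names (DHT)

function write (dcid: string, ipnsName: string, cid: string, data: string) {
  blockstore.set(cid, data)                      // 2. push replica data to the pinner
  ipns.set(ipnsName, cid)                        // 3. republish IPNS to the new root
  if (!providers.has(dcid)) providers.set(dcid, new Set())
  providers.get(dcid)!.add(ipnsName)             // 4. advertise IPNS key as provider
}

function read (dcid: string): string[] {
  const names = providers.get(dcid) ?? new Set<string>() // 1. query providers
  return [...names]
    .map((name) => ipns.get(name)!)              // 2. resolve IPNS names to CIDs
    .map((cid) => blockstore.get(cid)!)          // 3. resolve CIDs to replica data
}

write('dcid-1', 'ipns-device-a', 'cid-v1', '{ nerf this }')
console.log(read('dcid-1')) // [ '{ nerf this }' ]
```

Step 4 of reading (merging the fetched replicas into the local one) is protocol-specific and omitted here.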
<br/>
### Dynamic-Content IDs
A Dynamic-Content ID (DCID) looks like a CID, and both DCIDs and CIDs reference and identify content on the DHT.
*Where the two IDs differ is in their creation.*
While CIDs come from the hash of some static content, DCIDs are a permutation of the CID of a manifest document.
This immutable manifest document "describes" the dynamic content.
As stated in the previous section, DCIDs identify unique dynamic content.
They point to IPNS names by using Provider Records on the DHT.
---
> **Disclaimer: Dynamic-Content IDs, or DCIDs, only exist for the purpose of this article. It is not an official spec or part of IPFS. (expect a name change because I also hate "DCIDs" 🤢🤮)**
---
<br/>
### Manifest Document
Manifest documents, a term from OrbitDB, describe some unique dynamic content.
Manifests are immutable and contain information like the protocol and parameter used.
This document format is not formally specified, but included below is a specification for this article:
**dag-cbor**
```js
// cbor type reference: https://www.rfc-editor.org/rfc/rfc8949.html#section-3.1
{
  protocol: type 3, // major type 3: text string
  param: type 5     // major type 5: map
}
```
`protocol`: a text string field containing a protocol id
`param`: a key value map for exclusive use by the `protocol`
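For concreteness, a hypothetical manifest value matching this shape might look like the following (the protocol id and param keys are made up for illustration):

```typescript
// A hypothetical manifest value: `protocol` is a text string identifying the
// database protocol, `param` is a map whose meaning that protocol defines.
const manifest = {
  protocol: '/example/keyvalue/1.0.0', // hypothetical protocol id
  param: { name: 'my-database' }       // interpreted only by the protocol
}
console.log(manifest.protocol) // /example/keyvalue/1.0.0
```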
```js
// assumed imports, based on the multiformats stack used by the example repo
import * as Block from 'multiformats/block'
import { CID } from 'multiformats/cid'
import type { BlockView } from 'multiformats/block/interface'
import * as codec from '@ipld/dag-cbor'
import { sha256 as hasher } from 'multiformats/hashes/sha2'

// takes description of the dynamic content (protocol + params)
// returns manifest (Block) and dynamic-content id (CID)
export async function DynamicContent (
  { protocol, param }: { protocol: string, param: any }
): Promise<{ id: CID, manifest: BlockView }> {
  // create manifest
  const manifest = await Block.encode({ value: { protocol, param }, codec, hasher })

  // create dcid
  const dynamic = new TextEncoder().encode('dynamic')
  const bytes = new Uint8Array(dynamic.length + manifest.cid.multihash.digest.length)
  bytes.set(dynamic)
  bytes.set(manifest.cid.multihash.digest, dynamic.length)
  const dcid = CID.create(
    manifest.cid.version,
    manifest.cid.code,
    await hasher.digest(bytes)
  )

  return { id: dcid, manifest }
}
```
Above is a code block from the example attached to this article.
It shows a manifest document "describing" the dynamic content using the `protocol` and `param` properties.
It also shows the DCID derived from the manifest's CID.
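The digest step of that derivation can be re-sketched with only Node's standard library, so it runs without the multiformats dependencies. Note this shows only the hashing permutation; real DCIDs also carry the manifest CID's version and codec, as in the block above.

```typescript
// Stdlib-only sketch of the DCID digest permutation: hash the string
// 'dynamic' concatenated with the manifest's multihash digest.
import { createHash } from 'node:crypto'

function dcidDigest (manifestDigest: Uint8Array): string {
  const dynamic = new TextEncoder().encode('dynamic')
  const bytes = new Uint8Array(dynamic.length + manifestDigest.length)
  bytes.set(dynamic)
  bytes.set(manifestDigest, dynamic.length)
  return createHash('sha256').update(bytes).digest('hex')
}

// deterministic: the same manifest digest always yields the same DCID digest
const digest = new Uint8Array(32).fill(7) // stand-in manifest digest
console.log(dcidDigest(digest) === dcidDigest(digest)) // true
```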
<br/>
## Use-case: Edge-computed Applications
This design is particularly useful when paired with local-first databases.
These databases are partitioned (a.k.a. sharded) to only the interested parties.
It's common for only a few collaborators to be a part of a database, and there may be long periods without any of them online.
This context makes it challenging to build upon the history of collaborators, a challenge this design can potentially solve.
### Edge Devices
- Handle application logic and merging of replicas from other collaborators.
- Consist of a network of potentially unreliable peers that may come online and go offline at various times.
- Ensure the application history is available by commanding pinning servers.
### Pinning Servers
- Reliable storage servers that keep dynamic content available on IPFS.
- Pin IPLD replicas, and refresh IPNS and Provider Records for clients.
- Execute no app-specific code
### Replication
The design presented in this article is a replication protocol.
However, it is not a real-time replication protocol.
Applications with real-time features should include an app-specific replication protocol for use with other online collaborators.
Combining two replication protocols with these properties results in preserved and real-time P2P applications.
---
> **Pinning servers, in this context, provide a general and reliable replication layer to fall back on when no other collaborators are online.**
---
<br/>
## Roadblocks and Workarounds
It should be clear now that using Provider Records this way was not intended.
This brings us to the roadblock...
### No 3rd Party Publishing to DHT
[DHT servers validate that the PeerIDs inside received Provider Records match the PeerID of the node adding them.](https://github.com/libp2p/specs/tree/master/kad-dht#rpc-messages)
This check makes adding Provider Records for multiple PeerIDs to the DHT difficult.
Not great if you want to participate in multiple pieces of dynamic content as each will require its own IPNS name.
A Libp2p node may only add its own PeerID as a provider. This PeerID is also known as the "self" key.
There are two workarounds for now:
1. Use the "self" key for IPNS, and have it point to a CID for a map(DCID -> root replica CID) for all relevant dynamic content.
2. Spin up *ephemeral* libp2p nodes to refresh each IPNS name as a provider every [22 hours](https://github.com/libp2p/specs/tree/master/kad-dht#content-provider-advertisement-and-discovery).
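Workaround 1 can be sketched as follows; `publishSelfIpns` is a hypothetical stand-in for a real IPNS publish under the node's "self" key, and the encoded-CID string is a placeholder for encoding the map as an IPLD node.

```typescript
// Sketch of workaround 1: keep one map (DCID -> replica root CID) for all
// relevant dynamic content, and publish only that map's CID under the
// node's "self" IPNS key. All names here are hypothetical.

const selfMap = new Map<string, string>() // dcid -> replica root cid
let selfIpnsValue = ''

function publishSelfIpns (cid: string) { selfIpnsValue = cid } // stand-in

function updateEntry (dcid: string, rootCid: string) {
  selfMap.set(dcid, rootCid)
  // encode the whole map as one IPLD node and publish its CID; a string
  // stands in for the real encoded CID here
  publishSelfIpns(`cid-of:${JSON.stringify([...selfMap.entries()])}`)
}

updateEntry('dcid-1', 'bafy...root1')
updateEntry('dcid-2', 'bafy...root2')
console.log(selfMap.size) // 2
```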
### No Delegated Refreshing of IPNS OR Provider Records
Delegated publishing of IPNS and Provider Records is necessary to realize the edge-computed applications use case.
Unfortunately, there are no official plans to add this feature.
<br/>
## Example
---
> **USES HELIA 😲🤩 !!!! DHT IN 😵‍💫 JAVASCRIPT 😵‍💫 😵 !! DYNAMIC CONTENT ON IPFS!?🧐!?**
---
This example shows dynamic-content replication using IPLD, IPNS, and Provider Records. There are 3 [helia](https://github.com/ipfs/helia) (IPFS) nodes running in a single script, named `client1`, `client2`, and `server`. `client1` and `client2` dial `server` and use the `/ipfs/kad/1.0.0` protocol. After dialing, clients can add IPNS and Provider records to the DHT server. Clients also add IPLD data to `server` programmatically.
![](../assets/hosting-dynamic-content-mermaid-3.png)
---
> **`client1`, `client2`, and `server` are all in-memory Helia nodes created by a single script.**
> **IPLD data is added to the server by clients calling `server.blockstore.put` from within the script (programmatically), as opposed to using an HTTP API as in any real use case.**
---
### Usage
- Requires [npm and Node v18](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
#### Clone the Repo
`git clone https://github.com/tabcat/dynamic-content.git`
#### Install Packages
`npm install`
#### Run Examples
There are two example scripts. One is interactive, meaning after the example runs, a REPL starts with global variables available to operate the replication manually.
The scripts are `npm run example` and `npm run interactive`.
**If something is broken please open an [issue](https://github.com/tabcat/dynamic-content/issues)!**
<br/>
### What's Happening?
The example consists of 3 [Helia](https://github.com/ipfs/helia) nodes, named `client1`, `client2`, and `server`.
The `server` represents a reliable machine used as a
1. IPLD pinning server
2. DHT server
---
> **IPNS and Provider records are both stored in the DHT.**
---
The clients are unreliable machines used to read and write dynamic content.
In the example, `client1` does all the writing, and `client2` does all the reading.
![](../assets/hosting-dynamic-content-mermaid-4.png)
<br/>
This is a very high-level overview of what's going on.
Remember, this design uses only IPLD/IPNS/Provider Records.
It may be helpful to read [index.ts](https://github.com/tabcat/dynamic-content/blob/master/src/index.ts) (~200 LOC) for clarity.
### Sample Outputs
In case you are unable to run the example, below shows all the output that would occur:
<details>
<summary>`npm run example`</summary>
```sh
$ npm run example
> dynamic-content@1.0.0 example
> npm run build && node dist/index.js
> dynamic-content@1.0.0 build
> tsc
server is pinning ipld and serving dht ipns and provider records
client1: online
client1: added new values to set { nerf this }
client1: set state: { nerf this }
client1: encoded to raw data
client1: pushed data to pinner
client1: published ipns:12D3KooWRzE1FNCRXuz1C8Z3G8Q5oBg3C5nhKANSsFq377P1mWVn with value cid:bafyreihypffwyzhujryetatiy5imqq3p4mokuz36xmgp7wfegnhnjhwrsq
client1: advertised ipns:12D3KooWRzE1FNCRXuz1C8Z3G8Q5oBg3C5nhKANSsFq377P1mWVn as set provider
client1: offline
--- no peers online, Zzzzz ---
client2: online
dht query returned empty response
client2: found ipns:12D3KooWRzE1FNCRXuz1C8Z3G8Q5oBg3C5nhKANSsFq377P1mWVn as set provider
client2: resolved ipns:12D3KooWRzE1FNCRXuz1C8Z3G8Q5oBg3C5nhKANSsFq377P1mWVn to bafyreihypffwyzhujryetatiy5imqq3p4mokuz36xmgp7wfegnhnjhwrsq
client2: resolved ipfs:bafyreihypffwyzhujryetatiy5imqq3p4mokuz36xmgp7wfegnhnjhwrsq to raw data
client2: decoded raw data
client2: added new values to set { nerf this }
client2: set state: { nerf this }
client2: offline
```
</details>
<details>
<summary>`npm run interactive`</summary>
<br/>
The interactive example starts a REPL after the example has run.
```sh
$ npm run interactive
> dynamic-content@1.0.0 interactive
> npm run build && node dist/interactive.js
> dynamic-content@1.0.0 build
> tsc
server is pinning ipld and serving dht ipns and provider records
client1: online
client1: added new values to set { nerf this }
client1: set state: { nerf this }
client1: encoded to raw data
client1: pushed data to pinner
client1: published ipns:12D3KooWQXCo6Wzw7NmJRLC2peAX7fU6gHSydEKNAJfyfXCEwHFL with value cid:bafyreihypffwyzhujryetatiy5imqq3p4mokuz36xmgp7wfegnhnjhwrsq
client1: advertised ipns:12D3KooWQXCo6Wzw7NmJRLC2peAX7fU6gHSydEKNAJfyfXCEwHFL as set provider
client1: offline
--- no peers online, Zzzzz ---
client2: online
dht query returned empty response
client2: found ipns:12D3KooWQXCo6Wzw7NmJRLC2peAX7fU6gHSydEKNAJfyfXCEwHFL as set provider
client2: resolved ipns:12D3KooWQXCo6Wzw7NmJRLC2peAX7fU6gHSydEKNAJfyfXCEwHFL to bafyreihypffwyzhujryetatiy5imqq3p4mokuz36xmgp7wfegnhnjhwrsq
client2: resolved ipfs:bafyreihypffwyzhujryetatiy5imqq3p4mokuz36xmgp7wfegnhnjhwrsq to raw data
client2: decoded raw data
client2: added new values to set { nerf this }
client2: set state: { nerf this }
client2: offline
--- interactive example ---
client1: online
client2: online
Usage:
globals
help: this message
client1: helia client node (sender)
client2: helia client node (receiver)
server: helia ipld/ipns pinner and dht server
// compare the 2 clients sets
set1: client1's set variable
set2: client2's set variable
await connect(<client>) // connects client to server
await disconnect(<client>) // disconnects client from server
await update(...<string[]>) // create and publish changes from client1 - requires client1 to be connected
await sync() // syncs changes to client2 - requires client2 to be connected
>
```
</details>
---
> **Note: in practice, the DHT queries related to the Dynamic Content's ID only need to be run initially. Afterward, a protocol meant for real-time replication with online collaborators can be used.**
---
<br/>
## Credits
Big thanks to [@autonome](https://github.com/autonome), [@SgtPooki](https://github.com/sgtpooki), and [@lidel](https://github.com/lidel) for help writing this article!
Also thanks to [@willscott](https://github.com/willscott) for answering all my DHT questions in [#libp2p-implementers](https://app.element.io/#/room/#libp2p-implementers:ipfs.io)!
<br/>
## Get Involved
Sound interesting? Get involved! Come [chat](https://matrix.to/#/#hldb:matrix.org)
Have a question? Create an [issue](https://github.com/tabcat/dynamic-content/issues)
[I](https://github.com/tabcat)'m implementing this in [tabcat/zzzync](https://github.com/tabcat/zzzync)
<br/>
## FAQ
**Q**: Why not just share an IPNS name between devices to update?
**A**: IPNS names are not built to handle concurrent writes and should not be extended to do so. They are signed, versioned documents that only one device should update. As shown here, they are essential for creating a system that can handle concurrent writes.
<br/>
**Q**: Isn't this going to be slow?
**A**: This design complements real-time replication by providing a general and reliable layer to fall back to. It adds two steps on top of resolving a CID: 1) the DHT provider query and 2) the IPNS name resolutions.
Developers must reason about how to design replicas for efficient storage and replication over IPLD.
<br/>
**Q**: Provider Records do not support this use case. Could this affect DHT measurements?
**A**: If this use case became prevalent, it could affect DHT measurements. Using Provider Records this way would make it look like the content providers are offline because the PeerIDs are used only for IPNS.
<br/>
**Q**: Could IPNS and Provider Records be swapped out for alternatives and achieve the same goal?
**A**: Absolutely. The goal is to provide a general and reliable replication layer. Additionally, the more widespread the building blocks used, the more existing infrastructure can be leveraged.

View File

@@ -0,0 +1,72 @@
---
title: 'Recap: HTTP Gateways (þing 2023)'
description: 'A recap of the new HTTP Gateways track including summaries, links, and videos.'
author: Will Scott
date: 2023-05-30
permalink: '/2023-http-gateways-recap/'
header_image: '/http-gateways-recap.jpg'
tags:
- 'thing'
- 'þing'
- 'event'
- 'recap'
- 'track'
- 'http'
- 'gateways'
---
We had a new track at IPFS Thing last month: a forum focused on HTTP Gateways. As IPFS has scaled, the interactions between IPFS and the surrounding web have also increased. IPFS lives within the web, and as the [browser track](https://blog.ipfs.tech/2023-ipfs-thing-web-track/) noted, HTTP is deeply integrated with IPFS.
The Gateway track looked at the specific HTTP interface that IPFS as a server provides to web clients, and how web clients make use of that interface. There are continuing pushes to evolve the `/ipfs/<cid>` interface, but we need to understand both how these primitives should be used by higher-level APIs, and how to implement them.
A specific focus in this track was [Project Rhea](https://pl-strflt.notion.site/Project-Rhea-decentralized-IPFS-gateway-3d5906e7a0d84bea800d5920005dfea6), a cross-cutting project in Protocol Labs to decentralize the current gateways running at [ipfs.io](https://ipfs.io) to be hosted on decentralized infrastructure. This project has led to re-evaluation of the trust relationship between clients and gateways, and the hope that we can reduce the trust and increase the decentralization of gateways even further.
The talks in the track presented both different models for gateways, as well as implementation details for how components of Project Rhea are built.
In the rest of this post I'll provide links and brief color to the sessions in the track.
## What is Rhea?
[Will](https://wills.co.tt) kicked off the day with an overview of the architecture and goals of project Rhea.
@[youtube](0eJd2aqqSy8)
## IPFS Service Worker Gateways
[Adin](https://github.com/aschmahmann) demonstrated how web clients can reach origin IPFS hosts directly through protocols like WebTransport and WebRTC. The increasingly complete libp2p stack, along with HTTP-compatible services like IPNI, is bringing us to a reality where HTTP gateways become less critical in bridging IPFS support directly to end web users.
@[youtube](MRIyWXy0ZRc)
## Web3 CDN Saturn accelerates IPFS & Filecoin retrievals
Alex Kinstler provided an overview of Saturn as a decentralized CDN and described the service it can provide as a basis for Rhea and as a platform that can host the ipfs.io IPFS gateway.
@[youtube](f9iUTLtPtKY)
## Self-hosting IPFS Gateway with bifrost-gateway
[Lidel](https://github.com/lidel) walked through the architecture of `bifrost-gateway`, a new IPFS implementation that acts as a 'trustless gateway' client. This component, built for Rhea, provides an HTTP gateway interface compatible with the current gateways and can fetch data from remote nodes via self-verifying CAR files.
@[youtube](xhJPz_efAQE)
## Introduction to Caboose
[Aarsh](https://github.com/aarshkshah1992/) dove into a 'thick client' for Saturn called Caboose that allows Saturn clients to make requests to close nodes in order to optimize performance of the CDN. In the Rhea use case, Caboose both allows for and improves fraud detection, as well as enabling faster switch-over in the case of a node going down.
@[youtube](z7a9E735l3Y)
## Testing Your IPFS Gateway Implementation: A Step-by-Step Guide
[Piotr](https://github.com/galargh) offered a framework for testing whether an IPFS gateway implementation works as expected. This conformance testing can improve our confidence that new implementations will be compatible with existing applications, and it is much less implementation-specific than previous testing frameworks.
@[youtube](PmIf77thO_c_)
## Live CDN Incentives and its Future
[Claudia](http://w.laudiacay.cool/) sent us off with a great dive into how incentives can be built for a retrieval market CDN. She described how existing primitives can be linked together to support a high performance decentralized CDN that is incentive-aligned with serving content well and quickly.
@[youtube](yrrAjR03TsU)
## Conclusion
I hope this overview of the HTTP Gateways track was helpful for those who couldn't attend IPFS Thing 2023 or for those who did attend but need a refresher. Next year we hope to take this new content track to the next level!

View File

@@ -0,0 +1,32 @@
---
title: Introducing Lassie - a retrieval client for IPFS and Filecoin
description: 'An overview of Lassie, a simple retrieval client for fetching content-addressed data from IPFS and Filecoin.'
author: Brenda Lee
date: 2023-04-06
permalink: '/2023-introducing-lassie/'
header_image: '/Lassie.png'
tags:
- 'filecoin'
- 'retrieval'
---
Were excited to share that you can now use a simple retrieval client, named [Lassie](https://github.com/filecoin-project/lassie), to get your data from IPFS and Filecoin. Lassie makes it easy to fetch your data from both the IPFS and Filecoin Network - it will find and fetch content over the best retrieval protocols available.
For end users and clients, this means you can easily retrieve your content-addressed data (using CIDs) from IPFS or Filecoin using the Lassie client, without having to run your own IPFS or Filecoin node. Simply download the Lassie binary and start retrieving your data with this simple command:
```shell
lassie fetch <your CID here>
```
In addition to using Lassie directly to retrieve end user content, application developers can leverage Lassie as a library to fetch content from IPFS and Filecoin directly from within an application. Currently, the Saturn Network (a Web3 CDN in Filecoin's retrieval market) is using Lassie to retrieve data from IPFS and Filecoin.
Learn more about Lassie with these resources:
- Github: [https://github.com/filecoin-project/lassie](https://github.com/filecoin-project/lassie)
- Overview: [Basic Retrieval](https://docs.filecoin.io/basics/how-retrieval-works/basic-retrieval/)
- Technical documentation: [https://github.com/filecoin-project/lassie/tree/main/docs](https://github.com/filecoin-project/lassie/tree/main/docs)
- Ask questions: #retrieval-help in [Filecoin slack](https://www.notion.so/54fffa1b90ff4f6180586e79ff11ae17).
Special thanks to all who have paved the way building out prior retrieval clients ([w3rc](https://github.com/ipfs-shipyard/w3rc), [filclient](https://github.com/application-research/filclient)).
We encourage you to try this out and share with others who want to retrieve content from Filecoin or IPFS, and look forward to hearing your feedback. You can find us on [Github](https://github.com/filecoin-project/lassie) or #retrieval-help in [Filecoin slack](https://www.notion.so/54fffa1b90ff4f6180586e79ff11ae17).


@@ -0,0 +1,30 @@
---
title: Introducing the IPFS Ecosystem Working Group
description: 'Nurturing a vibrant and sustainable IPFS ecosystem.'
author: The Ecosystem Working Group
date: 2023-09-05
permalink: '/2023-introducing-the-ecosystem-working-group/'
header_image: ''
tags:
- 'Ecosystem'
- 'Working Group'
---
Since its initial release over 9 years ago, IPFS has been stewarded by a variety of teams and individual contributors, both within and outside of Protocol Labs. More recently though, it has lacked a dedicated team focused on nothing other than the success of the IPFS ecosystem. It is with this reality in mind that we are excited to announce the formation of **the brand new IPFS Ecosystem Working Group!**
At launch, the IPFS Ecosystem WG consists of Protocol Labs contributors, but we are forming with the explicit purpose of spinning out into our own independent entity over the coming months. We believe that this working group and its autonomy will be critical in helping propel IPFS toward a better and even brighter future.
Initially, we have four core goals:
1. Foster a thriving ecosystem by advocating for IPFS,
2. Build bridges between IPFS and other ecosystems that could benefit from content addressing,
3. Grow the community and develop strong and durable community ownership of the IPFS project as a public good, and
4. Spin out from Protocol Labs into a self-sustaining organization that can support the IPFS community and build robust, effective governance for the protocol.
The future of IPFS requires greater degrees of decentralization, so you can expect other IPFS-focused teams to begin spinning out from Protocol Labs in the future as well.
As we continue to make progress towards these goals, we will provide updates and work in the open so as to keep the entire community in-the-loop with what we're doing. A thriving ecosystem requires care and attention, and we believe that this new initiative and nimble team will be able to deliver on exactly that. But our work is only part of the story: community ownership means that your voice as IPFS users, operators, or contributors needs to be heard just as much as ours.
So if you have comments, questions, or concerns, then please join the discussion in the comments section below, via the [Ecosystem section of the IPFS Forums](https://discuss.ipfs.tech/c/communities/ecosystem/15), or the various chat links that follow. Together, let's work on making IPFS thrive!
[Forums](https://discuss.ipfs.tech/c/communities/ecosystem/15) | [Discord](https://discord.com/channels/806902334369824788/1146489977174233098) | [Slack](https://filecoinproject.slack.com/archives/C05PGBP697E) | [Matrix](https://app.element.io/#/room/#ipfs-ecosystem:ipfs.io)


@@ -0,0 +1,46 @@
---
title: Announcing IPFS Connect Istanbul 2023
description: 'Join the IPFS Community for a full day of workshops, lightning talks, and demos showcasing technology, tools, and innovative projects in the IPFS ecosystem.'
date: 2023-09-20
header_image: '/ipfsconnect2023.jpg'
tags:
- events
---
**IPFS Connect** is a community-run regional conference bringing together all of the builders and ecosystems that rely on and use IPFS as the most widely used decentralized content addressing protocol for files and data. This year's event is happening alongside Devconnect and LabWeek23 in **Istanbul, Turkey on November 16**. Join the IPFS Community for a full day of workshops, lightning talks, and demos showcasing technology, tools, and innovative projects in the IPFS ecosystem.
There are several opportunities for you to get involved with this event whether you're a business, organization, or individual.
## Present to a large community of builders at IPFS Connect
We're planning a full day of talks & workshops to prepare and inspire builders from around the globe who are attending Devconnect. This includes large ecosystems and communities that rely on and integrate with IPFS, as well as individual builders joining to hack at the ETHGlobal Hackathon that starts the day after IPFS Connect.
### Presentation & Workshop Types
- Lightning talks & demos: short talks presenting your service, code, or app, designed to inspire, get signups, or talk about how you built it using IPFS.
- Workshops: we have 2 dedicated workshop spaces that will be running all day. Run a workshop session with attendees walking through your solution so they're ready to hack the next day.
- Discussion: run a collaborative discussion on topics of interest - privacy, self hosting, devops best practices, decentralized data compliance, etc.
- Full talks: present in one of two theater spaces, with professional video recording. We want to hear about your technical how-to, user stories, and other talks that inspire and educate.
Speakers receive a free ticket to the event, as well as discount codes to invite their community.
<a href="https://cfp.ipfsconnect.org/ipfsconnect-istanbul/cfp" class="cta-button">Submit a presentation or proposal</a>
## Attend and connect with other community members
Tickets for IPFS Connect are on sale now, so you can register and buy early bird tickets to get full access to the day's events and opportunities: [https://istanbul2023.ipfsconnect.org](https://istanbul2023.ipfsconnect.org)
<a href="https://istanbul2023.ipfsconnect.org" class="cta-button">Register today</a>
## Sponsor the event to reach your target audience
You can learn more about multiple sponsorship opportunities [via the sponsor deck here.](https://docs.google.com/presentation/d/1UMbRP5pYHDL5TluzBWUuqlo6DB6F1G8bgUZhMa_w_Pc/view#slide=id.p)
**Developers:**
- Beginners to experts in IPFS
- Interested in adding IPFS to their Web3 stacks so that dapp front ends are decentralized and want to use off-chain files and data to build more usable apps
**Startups, DAOs, Sovereign Chains:**
- Select technology and service providers building on the IPFS tech stack combined with other Web3 components
- Exploring novel use cases for IPFS including provenance, computation, identity, and more
<a href="https://docs.google.com/presentation/d/1UMbRP5pYHDL5TluzBWUuqlo6DB6F1G8bgUZhMa_w_Pc/view#slide=id.p" class="cta-button">Learn more about sponsoring</a>


@@ -0,0 +1,20 @@
---
title: IPFS is now on Bluesky!
description: 'We're excited to share that IPFS now has an official presence on Bluesky, a new decentralized social network that recently spun out of Twitter.'
author:
date: 2023-04-17
permalink: '/2023-ipfs-on-bluesky/'
header_image: '/2023-ipfs-on-bluesky.png'
tags:
- 'social media'
---
We're excited to share that [IPFS now has an official presence on Bluesky](https://staging.bsky.app/profile/ipfs.tech), a new decentralized social network that recently spun out of Twitter. Powered by the in-development [AT Protocol](https://atproto.com/), Bluesky aims to be a Twitter-like social media client that provides the benefits of interoperation, account portability, and algorithmic choice.
Up until now, IPFS has only had an official social media presence on Twitter. As the landscape of social media continues to change dramatically, we believe having a presence in more than just one place will be beneficial to the IPFS community, ecosystem, and brand.
We chose to join Bluesky because it shares many of the [same values and goals](https://specs.ipfs.tech/architecture/principles/) that the IPFS ecosystem has. Additionally, the [AT Protocol](https://atproto.com/) actively utilizes [IPLD](https://ipld.io/) and [content addressing](https://docs.ipfs.tech/concepts/how-ipfs-works/#subsystems-overview). The CEO of Bluesky, Jay Graber, even [gave a talk at IPFS Camp 2022 about Bluesky](https://www.youtube.com/watch?v=jGbBZbl-V8Y) that can be watched below, and her blog post about self-certifying protocols is referenced on the [IPFS Principles](https://specs.ipfs.tech/architecture/principles/#self-certifying-addressability) web page.
@[youtube](jGbBZbl-V8Y)
It's still very early days at Bluesky (and it's still in private beta), but it has shown early promise in solving some of the critical problems the social web has been plagued with. If you're on Bluesky and want to keep up with IPFS there, [then give the new profile a follow @ipfs.tech](https://staging.bsky.app/profile/ipfs.tech)!


@@ -0,0 +1,88 @@
---
title: 'Recap: Community & Governance (þing 2023)'
description: 'A recap of the Community & Governance track including summaries, links, and videos.'
author: Boris Mann and Robin Berjon
date: 2023-05-23
permalink: '/2023-ipfs-community-governance/'
header_image: '/community-governance-recap.jpg'
tags:
- 'thing'
- 'þing'
- 'event'
- 'recap'
- 'track'
- 'community'
- 'governance'
---
Governance and community are two ideas that vibe like they wouldn't live in the same part of town if their lives depended on it. Community is warm, fun, and fuzzy if probably chaotic and occasionally infuriating, whereas governance sounds a lot more like flossing, something dry and painful that you pretend your project does to make the Serious People go away. But much like the predictable transition from misunderstanding to mutual respect in a buddy movie, these two were destined to form just the dynamic duo we need to take on insuperable odds. Community is the for and by of governance, and, quite frankly, the exercise of governance is messy, chaotic, and a crucible from which new, more resilient communities emerge.
The full-day Community & Governance track that brought us together at IPFS þing 2023 (which you can [watch in its entirety](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTIFbOVO5YfXkoFg6wIGbBN)) had all of that energy and then some. We spent the time bouncing back and forth between how to prevent capture by lumbering megacorporations and how to gather friends for a nice community café, what's a data protection officer and better ways to herd cats, ways of protecting people from some of the worst content on the Internet while supporting censorship-resistance and how to support commons and community ownership. It was a ride and a delightful one too.
Perhaps the core issue that brought us together across the diverse presentations was that we want to learn from the mistakes of the past and organize the community so that we can bring about a better world. It's not the worst plan.
## Memory in Uncertainty
"*Is that an elaborate way of saying everything's fucked?*" That might not be your typical audience question but then this wasn't your typical presentation either (or your typical audience, for that matter).
Cade Diehm is one of the brains behind [*The New Design Congress*](https://newdesigncongress.org/) (NDC), a research organization that practices "*ethical red teaming*" to identify issues with sociotechnical systems. [WebRecorder](https://webrecorder.net/) and [the Filecoin Foundation](https://fil.org/) hired NDC to take an in-depth look at web archiving (on IPFS) to help identify problems. Cade came to IPFS þing to update the community on his findings, which are captured in the [Memory in Uncertainty: Web Preservation in the Polycrisis](https://members.newdesigncongress.org/memory-in-uncertainty-web-preservation-in-the-polycrisis/) report.
"*The answer,*" Cade told us, "*is: 'kind of.'*" He gave a wide-ranging presentation ranging over the dangers of decentralized technology, the complexity of archives, the challenges presented by the potential weaponization of data, and much more. It provided a powerful call to take the impact of our tech seriously and to keep in mind that tech can only be ethical if it is governed by the people it impacts. Cade concluded with a set of tools to help avoid bad outcomes, because "*not everything is screwed.*"
@[youtube](TdiQGXSZmCk)
## Community Organizing
For all that remote work and online collaboration have improved, it's hard to have a strong and durable sense of community without meeting people in the flesh now and then. Thankfully, we have many gatherings to look forward to this year!
[Vukasin Vukoje](https://twitter.com/wukoje) made a surprise announcement of a new event series: Compute Camp. Details will be released more fully soon, but the first edition of this series dedicated specifically to distributed compute will take place later this year in Belgrade, Serbia. If you care more about doing things to data and less about where it's stored, this might just be the place for you.
Yuni Graham and Niki Gokani walked us through the organization of IPFS Camp, which will take place in November in Bangalore. We worked together to figure out what the best structure and content for the event would be. They're looking for volunteers to be part of the content planning work — please consider reaching out if you're interested!
@[youtube](U5u54jwOg6k)
And if what you're looking for is a more local IPFS-focused event, why not run your own? Yuni Graham and Nicole Schafer presented IPFS + Friends Café, a collection of local community gatherings to help develop an IPFS community on the ground, around the world. If you're interested, they might be able to sponsor such an event as well as assist with logistics and finding some speakers. It would be wonderful to see more local evangelism!
@[youtube](FII_9VTgDy8)
## Cat Herding
A growing community is a blessing, but keeping up with everything that is happening can become overwhelming. Several sessions provided us with both updates and tools to stay on top of future updates.
Henrique Dias (aka [@hacdias](https://twitter.com/hacdias)) walked us through [specs.ipfs.tech](https://specs.ipfs.tech/), the hot new place to get IPFS specifications. Not all of the IPFS specs have been moved there yet, but they're in the process of being ported over and everything new will be on the specs site from the get-go. This site is intended to grow into the one-stop-shop reference for IPFS implementations, ideally reaching the point at which one could produce an IPFS implementation from scratch using those documents alone (along with the emerging test suite, of course).
@[youtube](vQVnjEIPuCE)
At last year's þing in Iceland, the IPIP (IPFS Improvement Process) process was announced. The indefatigable @lidel walked us through all of the IPIP work that has happened since, and it's a lot! Initially announced as a lightning talk, this was more of a twenty minute presentation at lightning speed.
Keep in mind that this process is open to anyone in the community (and if you're reading this that means *you*). There is an [IPIP Pipeline GitHub project](https://github.com/orgs/ipfs/projects/19) that maintains an up-to-date status of all IPIPs, and the IPIPs get discussed on the [IPFS Implementers Working Group](https://lu.ma/ipfs-implementers). More generally, the [IPFS Community Calendar](https://lu.ma/ipfs) keeps track of the various meetings and events in which the evolution of the IPFS stack gets discussed.
@[youtube](WcHlV6sQuDI)
But then again, specs are only one corner of IPFS, and IPFS one corner of a bigger family of technologies. Staying on top of everything that is happening in *\[gestures vaguely around]* this space remains daunting. One novel tool that is already helping people get a clearer sense of what's happening (and that you can use as well) is [Starmap](https://starmap.site/). The core principle of Starmap is very simple: by structuring your GitHub issues according to very simple conventions, you can create a nested tree of issues that spans any arbitrary set of repositories and see all of those organized in a single Starmap.
The idea is that people should be organizing and coordinating code whichever way they see fit, but it should be possible to obtain an overview of a project's progress across all of its components nevertheless. One example is the [Kubo/Boxo 2023Q2/Q3 items](https://starmap.site/roadmap/github.com/ipfs/kubo/issues/9817#list).
Bastien Dehaynin from Fission provided us with a clear and exciting overview and demo of the system. Several people in the room were already users, and there was definite interest in getting a Starmap for specs.
@[youtube](_HoLDQreF28)
## Governance
In order to keep IPFS and its broader ecosystem pushing in a direction that benefits all people, to support impactful collective action and ownership, and to avoid it being captured by larger players, we need to deploy matching governance capabilities. Your friendly authors, Boris Mann and Robin Berjon, ran a workshop on "What Should We Governance?" with the goal of surfacing risks and pain points regarding governance of the IPFS ecosystem. This produced a lot of very valuable input, yet we feel like we have barely scratched the surface.
@[youtube](svqlHO3K_RQ)
Our dynamic duo then split their color-coordinated purple outfits, first with Boris discussing the allocation of funding for code and other community work, and suggesting that it would be great to use Starmap to find which parts of a project are most in need of funding.
@[youtube](PysiACKo1dI)
And then Robin talked about the ongoing work in the [Decent Data Compliance WG](https://github.com/DDC-WG) where parties from across the decentralized world are working together to figure out how to manage "[bad bits](https://badbits.dwebops.pub/)", how to protect operators from serving some of the worst content on the Internet (or simply things they don't want to host), and how to make sure that people's privacy rights are respected. There's a lot of work to be done, but it's heartening to see that people are taking these issues seriously.
@[youtube](bIlji91KEFQ)
## Where Next?
The day made it clear that there is strong interest in community and governance in the IPFS universe, and you can expect to hear a lot more on this side of things. While different aspects of these concerns have places where people can gather to discuss them (as seen in the links sprinkled above), overall coordination and cooperation around governance in the decent(ralized) world remains limited. We joked that we might need a "Working Group Working Group" to provide lightweight support for all the community working groups that keep emerging and help them work together. But the feedback was that it might not actually be such a joke of an idea.
Stay tuned!


@@ -0,0 +1,92 @@
---
title: 'Recap: Content Routing (þing 2023)'
description: 'A recap of the Content Routing track including summaries, links, and videos.'
author: Masih Derkani
date: 2023-05-15
permalink: '/2023-ipfs-thing-content-routing-track/'
header_image: '/ipfs-thing-2023-recap/content-routing/content-routing-recap-slides.png'
tags:
- 'thing'
- 'þing'
- 'event'
- 'recap'
- 'track'
- 'content'
- 'routing'
---
The term "content" is ubiquitous in discussions about knowledge sharing, regardless of the platform used. IPFS takes this term to a new level by defining content as an immutable piece of information, identified by a cryptographic hash that defines its identity. Any change in the information results in a different identity, making the content immutable. This property has a subtle yet powerful advantage: a receiver of a piece of information can verify its authenticity based on its identifier. This simple concept leads to an important question: how can one locate shared content using its identity? 🤔 This is where "Content Routing" comes in.
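The self-verifying property described above can be sketched in a few lines of Python. This is a simplification: real CIDs are multihashes that also carry codec and hash-function metadata, not bare SHA-256 hex digests.

```python
import hashlib

def content_id(data: bytes) -> str:
    # The digest of the bytes *is* the identity of the content.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, cid: str) -> bool:
    # A receiver re-hashes what it got and compares it to the
    # identifier it asked for -- no trust in the sender required.
    return content_id(data) == cid

original = b"hello, interplanetary world"
cid = content_id(original)

assert verify(original, cid)             # untampered bytes verify
assert not verify(original + b"!", cid)  # any change yields a new identity
```

Because the identifier is derived from the content itself, it does not matter *who* serves the bytes; that is what makes decentralized retrieval possible.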
Content Routing is the crucial first step in exchanging content within the IPFS network. Once a Content Identifier (CID) is generated from a piece of information, Content Routing enables the information to be both discoverable and discovered. In other words, it involves telling the network, "Hey, I have content, and here is its CID," as well as answering peer questions such as "Who has this CID?".
This seemingly simple yet paramount functionality enables the network to share immutable and verifiable pieces of information. Since the inception of IPFS as a protocol, Content Routing has taken various forms and utilized several techniques to fulfill its promise of sharing knowledge. It remains an essential component of the IPFS ecosystem, as evidenced by its dedicated track at IPFS þing 2023 in Brussels, Belgium, last month.
At IPFS þing 2022 a year ago, Content Routing was divided into two tracks: [Privacy](https://www.youtube.com/watch?v=VLU44qtXypE&list=PLuhRWgmPaHtTegfLTVFYtTtqTKQEtDvxW) and [Performance](https://www.youtube.com/watch?v=AWbobt9oHZ0&list=PLuhRWgmPaHtSF3oIY3TzrM-Nq5IU_RTXb). This year, both tracks were combined into one glorious Content Routing track that covered both areas. We had the privilege of hosting talks from community leaders who discussed the impressive improvements in performance and scalability of content routing systems, the privacy preservation techniques that cut across different systems, as well as community call-outs and discussions on how to get involved and build a better decentralized web together.
The track offered a comprehensive view of the content routing evolution since the inception of IPFS and showcased the latest advancements in the IPFS ecosystem. It provided an overview of the [InterPlanetary Network Indexer (IPNI)](https://github.com/ipni) and explained how it enables the mass publication and lookup of content across hundreds of billions of CIDs. The latest developments in reader privacy preservation, a mechanism that allows private lookups of content on both the IPFS DHT and IPNI, were also presented.
The rest of this blog post offers highlights, links, and a brief commentary on the talks.
The full playlist of talks at the IPFS þing 2023 Content Routing track can be found [here](https://www.youtube.com/watch?v=oe7fjOl-q0s&list=PLuhRWgmPaHtRBWV3SvInC5ATS8aKV3lsW). To learn more about Content Routing, check out the previous tracks at [IPFS Camp 2022](https://www.youtube.com/watch?v=7nb5oEpURCU&list=PLuhRWgmPaHtRqhFZ-CAstJ0RIq7Vs-4eO) and the [IPFS YouTube channel](https://www.youtube.com/@IPFSbot/playlists).
## Content Routing Track Introduction by Masih Derkani
[Masih](https://derkani.org/) presented an overview of Content Routing as a concept, its evolution over time, along with the evolutionary trends of content routing in the IPFS ecosystem. The talk illustrated what routing content in the IPFS network looks like today and explained how the mesh of content providers of different sizes interconnects. It also showcased the sub-systems that enable content routing to "just work", regardless of where the data resides.
@[youtube](oe7fjOl-q0s)
## Opening the DHT to large content providers by Guillaume Michel
How does a 1M x reduction in opened connections sound? That's right, providing data via the DHT is becoming much more efficient for large content providers thanks to "regions". [Gui](https://github.com/guillaumemichel) presented the latest research on how the DHT key space can be divided across regions to reduce the number of connections as well as messages sent to make content discoverable via the IPFS DHT.
@[youtube](bXaL64fp55c)
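The exact region construction is detailed in the linked research; purely as an illustration of why grouping by keyspace helps, here is a hypothetical sketch (not the actual protocol) that buckets provider records by a shared key prefix, so a provider can open one connection per region instead of one per record:

```python
import hashlib
from collections import defaultdict

def dht_key(cid: str) -> str:
    # DHT keys live in a hashed keyspace, not raw CID space.
    return hashlib.sha256(cid.encode()).hexdigest()

def group_by_region(cids, prefix_bits=8):
    # One "region" = all keys sharing the first prefix_bits of the
    # keyspace. Every advertisement that falls in a region can be
    # batched over a single connection to that region's peers.
    hex_chars = prefix_bits // 4
    regions = defaultdict(list)
    for cid in cids:
        regions[dht_key(cid)[:hex_chars]].append(cid)
    return regions

cids = [f"bafy-demo-{i}" for i in range(10_000)]
regions = group_by_region(cids)

# Bounded number of connections regardless of how many CIDs are provided.
assert len(regions) <= 2 ** 8
assert sum(len(v) for v in regions.values()) == len(cids)
```

With 8 prefix bits the provider contacts at most 256 regions however many records it publishes, which is the intuition behind the connection-count reduction the talk describes.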
## IPNI: the InterPlanetary Network Indexer by Masih Derkani
Talking of large content providers, IPNI, the InterPlanetary Network Indexer, is an alternative routing system designed from scratch to provide content by the bucket load. [Masih](https://derkani.org/) presented how IPNI achieves this by betting on storage becoming cheaper and using replicas to reduce the need for trust to provide single hop lookup for trillions of CIDs. He explained how IPNI handles changes in the subset of CIDs advertised by content providers in a super-efficient protocol. IPNI as a concept has been around for about a year; it is the same protocol that makes Filecoin content discoverable over the IPFS network. As a protocol, it has now grown large enough to deserve its own "InterPlanetary" acronym and a growing set of [specifications](https://github.com/ipni/specs).
@[youtube](_EDJXeDtcX4)
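The real schemas live in the IPNI specifications; as a rough illustration of the super-efficient change handling mentioned above (field names here are made up, not the actual IPNI schema), each advertisement can be modeled as a hash-linked record that an indexer syncs by walking `prev` links until it reaches state it already knows:

```python
import hashlib
import json

def make_ad(prev_hash, provider, entries):
    # Each advertisement commits to its predecessor by hash, forming
    # a verifiable chain an indexer can walk from head to genesis.
    ad = {"prev": prev_hash, "provider": provider, "entries": entries}
    digest = hashlib.sha256(json.dumps(ad, sort_keys=True).encode()).hexdigest()
    return digest, ad

chain, head = {}, None
for batch in (["cid-a", "cid-b"], ["cid-c"], ["cid-d", "cid-e"]):
    head, ad = make_ad(head, "provider-1", batch)
    chain[head] = ad

# An indexer syncs incrementally: follow prev links from the head.
seen, cursor = [], head
while cursor is not None:
    ad = chain[cursor]
    seen.extend(ad["entries"])
    cursor = ad["prev"]

assert sorted(seen) == ["cid-a", "cid-b", "cid-c", "cid-d", "cid-e"]
```

Publishing a change only appends a new advertisement to the head, so providers never re-announce their entire catalog, which is what lets IPNI keep up with trillions of CIDs.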
## cid.contact: one year on by Masih Derkani
Having made the distinction between "protocol" and "implementation", [Masih](https://derkani.org/) presented a second talk on [`cid.contact`](https://cid.contact), the largest, most mature IPNI cluster. `cid.contact` is built into [Kubo](https://github.com/ipfs/kubo) as a default routing system since version [`0.18.0`](https://github.com/ipfs/kubo/releases/tag/v0.18.0) and is the content router of choice for [Lassie](https://youtu.be/d5SzSm8NkUU) used by [Rhea](https://youtu.be/p89i9_AskIw). The talk covered the latest architecture of `cid.contact` and the newest features, such as cascading lookup over the IPFS DHT and Bitswap, that make it a one-stop content router, tuned to find content no matter where it might be. `cid.contact` has ingested over 1.3 trillion CIDs from hundreds of providers, and just turned one this April. Happy 1st birthday! 🎂
@[youtube](CPlOdNqJ8og)
## IPFS Content Routing Workgroup, an introduction by Torfinn Olsen
Ever wondered where the content routers meet? 🧙 Look no further; the Content Routing Workgroup is it! [Torfinn](https://github.com/TorfinnOlsen) provided an overview of what the workgroup aims for, how community decisions are made, and how things get prioritized in the pipeline. He presented the roadmap ahead for the workgroup and invited the community to join. The workgroup meetings are public and open to all. You can find recordings of the previous meetups [here](https://www.youtube.com/watch?v=LsCH8xw3__c&list=PLuhRWgmPaHtRP5lVouK_eqhC98xaej6Px). Whether it's the next big idea you'd like to propose or just to observe what content routers get up to all day, you are most welcome.
@[youtube](MagS8ly_YXE)
## DHT ~~Double Hashing~~ Reader Privacy Updates & Migration Plan by Yiannis Psaras
It was at the first IPFS þing in Reykjavík where [Gui](https://github.com/guillaumemichel) presented the idea of [Double Hashing](https://www.youtube.com/watch?v=ZPIDU1-JnVc) in the context of Content Routing. Yep; we love hashes so much we're gonna do it twice! In this technique, rather than looking up a CID straight up, it is hashed again and its "double-hashed" value is the key that's used for lookup. In turn, the lookup results are then returned in encrypted form using the original CID as the encryption key. Pretty nifty, right?! Gui presented two follow-up talks on this at IPFS Camp 2022 further [explaining the core idea](https://youtu.be/VBlx-VvIZqU) and what [transitioning to it would mean for content routing](https://youtu.be/m-6_VZ8e1tk). At Brussels, [Yiannis](https://github.com/yiannisbot) walked us through the latest updates in the rollout of ~~Double Hashing~~ Reader Privacy to the IPFS DHT, one of the routing systems in use today. The initial phase of privacy preservation focuses on the "reader" side, where an external observer cannot know what a user is looking up without knowing the original CID. Later work will build on this to expand the privacy benefits to the "writer" side, i.e., content providers.
@[youtube](FP4kKemco4w)
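The double-hashing idea described above can be sketched as a toy model. This is illustrative only: the real rollout uses the specific hash constructions and ciphers defined in the spec, whereas here plain SHA-256 stands in for the double-hash and a SHA-256-derived XOR keystream stands in for the cipher.

```python
import hashlib

def lookup_key(cid: bytes) -> bytes:
    # The routing system only ever sees Hash(CID), never the CID itself.
    return hashlib.sha256(cid).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher keyed on the original CID: derive a keystream
    # from SHA-256(key || counter) blocks and XOR it with the data.
    # (Symmetric: applying it twice with the same key decrypts.)
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

cid = b"bafy-original-cid"
record = b"/ip4/203.0.113.7/tcp/4001/p2p/QmExamplePeer"

# The publisher stores the record under the double-hashed key,
# encrypted with the original CID as the key.
index = {lookup_key(cid): xor_cipher(cid, record)}

# A reader who knows the CID can look up and decrypt; an observer
# who only sees lookup_key(cid) and the ciphertext learns neither.
assert xor_cipher(cid, index[lookup_key(cid)]) == record
```

The upshot is exactly what the talk describes: the server answers lookups without ever learning which CID was requested or what provider record it returned.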
## Double Hashing in IPNI: Reader Privacy at scale by Ivan Schasny
Privacy preservation is a quality that cuts right across routing systems. ✂️ This means no matter how the content is advertised or found we _want to_ preserve the user's privacy. In this talk [Ivan](https://github.com/ischasny) walked us through what this means for IPNI and how it is changing the architecture of `cid.contact` to incorporate reader privacy at its very core: `cid.contact` is moving to _only_ store encrypted provider records, which means even the servers do not know what CIDs are being looked up. He expanded on how this big change is being rolled out gracefully, in stages, and what's to come in the near future. Watch the [`#ipni` channel on Filecoin Slack](https://filecoinproject.slack.com/archives/C02T827T9N0) for the latest updates.
@[youtube](Q46zJ_mai2c)
## Private data: state of the art by Ian Preston
Taking things one step further on the privacy front, [Ian](https://peergos.org/about#ian_) and his team have been busy building privacy deep at the heart of [Peergos](https://peergos.org/). How does it work? The talk takes a deep dive into the Peergos architecture and how it utilizes `cryptree+`, BATs, and Capabilities to enable post-quantum ciphertext-level access control with improved metadata preservation and better performance. Ian walked us through the challenges they faced, such as garbage collection, and how the team overcame them to make application sandboxing a piece of cake. 🍰 As for the icing, check out Ian's slides shared right from Peergos [here](https://peergos.net/public/demo/talks/2023/ipfs-thing/private-data/web/index.html?open=true).
@[youtube](HVyrVUI2-RA)
## Content Advertisement Mirroring by Andrew Gillis
As the adoption of IPNI as an alternative content routing protocol continues to grow, so does the need for scaling. 🚀 At IPFS Camp 2022, [Andrew](https://github.com/gammazero) presented how IPNI is [scaling the content routing](https://youtu.be/qaCB0UKqwAk). Building on top of previous work, this talk covered how the replication of content advertisements from providers is making ingestion (and re-ingestion) 5X faster. This means new IPNI instances can use alternative sources to build up their index records with much higher velocity, moving us closer to a federated mesh of IPNI instances that continue to maintain lookup latency on the order of a few milliseconds at 10^15 scale!
The talk was followed by a discussion on a set of open questions as we scale the IPNI network. Take a look and get involved right from where we left off at the next Content Routing Workgroup meeting!
@[youtube](6l0i8DjhpLg)
## A Massive Shout-out
It's great to see the IPFS community coming together and celebrating the latest advancements in the field. A big thank you to all who attended the track at Brussels and to the speakers who presented and helped generate questions. Last but not least, a massive shout-out to the community that tirelessly drives the vision (a better web for all) forward. 🙇
See you on the decentralised web! ✊


@@ -0,0 +1,58 @@
---
title: IPFS þing 2023 Recap
description: 'Highlights, photos, and videos from the annual gathering of the IPFS implementers community.'
author:
date: 2023-05-04
permalink: '/2023-ipfs-thing-recap/'
header_image: '/ipfs-thing-2023-recap/ipfs-2023-recap-featured-image.jpg'
tags:
- 'thing'
- 'þing'
- 'event'
- 'recap'
---
The IPFS implementers community recently came together in Brussels, Belgium for the second year of IPFS þing, an annual gathering dedicated to advancing IPFS implementation. With 12 tracks and over 75 talks, demos, and sessions, the 5-day summit that occurred in April 2023 was a showcase of recent advances across IPFS, a forum for sharing needs from the protocol, and an opportunity to chart new directions for the future of IPFS.
![](../assets/ipfs-thing-2023-recap/group-collage.jpg)
Over 130 participants joined for a collection of talks, workshops, discussion circles, hacking time, and many many many hallway conversations. Here are a few memorable highlights:
* **Operators and Deployments:** In this track, the people putting IPFS into production gathered to share their architectures, best practices, and war stories. We laughed. We cried. We looked at a LOT of graphs. This track also had some crazy performance numbers: we learned that IPFS can be _really_ fast. ([YouTube playlist](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTYOY5l8nehP_Vt6Ek-svrp))
* **Data Transfer:** Since the last IPFS þing, a few different teams that needed faster transfer speeds came together to form the [Move The Bytes Working Group](https://mtngs.io/ipfs/move-the-bytes-wg/). This track included an update on how things are going with the initiative and showed how IPFS can be 10x faster at getting your stuff than Web2! ([YouTube playlist](https://www.youtube.com/playlist?list=PLuhRWgmPaHtS6WBDGK8oxcBHA6ILKatVk))
* **Community and Governance:** IPFS is a public good. It's important that it stays that way, that it resists capture, and that it has great governance by (and for) its community. This track charted a course to broad community participation in everything from core protocol standards to planning the next IPFS Camp (no spoilers: you'll have to watch the talks to know when and where it's going to be!). ([YouTube playlist](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTIFbOVO5YfXkoFg6wIGbBN))
* **IPFS Gateways:** Gateways (servers that translate between HTTP and IPFS) are the biggest onramp to the IPFS network, but they're also its biggest single point of centralization. Huge changes are happening in this area, so if you're a gateway user or your service relies on one, be sure to watch these videos. ([YouTube playlist](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTapMgLW7rRh92Tk8u7wip5))
* **Integrating IPFS:** IPFS is used in environments ranging from tiny IoT sensor platforms to mobile devices to satellites in space. In this track, participants learned how IPFS is being implemented in space, on mobile devices running iOS or Android, and in higher level application constructs like “accounts”! We even learned what an “ipfs run” command would look like for distributed functions. ([YouTube playlist](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTI0MS6ZjSJjBxZp7rcjSS_))
* **Content Routing:** The first step to exchanging “content” over the IPFS network is either to 1) find the given content using its Content ID (CID), or 2) publish the given content by making its CID known. The Content Routing track covered a holistic view of the content routing evolution since the inception of IPFS and showcased the latest advancements in this simple yet paramount operation in the IPFS ecosystem. This track provided an overview of the InterPlanetary Network Indexer (IPNI), and expanded on how it enables mass publication and lookup of content across hundreds of billions of CIDs. The latest developments in reader privacy preservation, a mechanism that enables private lookups of content on both the IPFS DHT and IPNI, were also presented. ([YouTube playlist](https://www.youtube.com/playlist?list=PLuhRWgmPaHtRBWV3SvInC5ATS8aKV3lsW))
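The HTTP↔IPFS translation the Gateways bullet describes boils down to mapping a CID onto a gateway URL. Here's a minimal sketch, assuming the common path-style (`/ipfs/<cid>`) and subdomain-style (`<cid>.ipfs.<host>`) URL conventions; the gateway host and CID are illustrative placeholders, not anything from the talks:

```javascript
// Sketch: turning an ipfs:// URI into the two common HTTP gateway URL styles.
// Note: real subdomain gateways require a case-insensitive CIDv1; the CID and
// gateway host used here are illustrative placeholders.
function toGatewayUrls (ipfsUri, gatewayHost = 'ipfs.io') {
  const cid = ipfsUri.replace(/^ipfs:\/\//, '')
  return {
    // path style: the whole gateway shares one web origin
    path: `https://${gatewayHost}/ipfs/${cid}`,
    // subdomain style: each CID gets its own origin, so browser
    // same-origin isolation applies per site
    subdomain: `https://${cid}.ipfs.${gatewayHost}/`
  }
}

console.log(toGatewayUrls('ipfs://bafybeihypotheticalcid'))
```

The subdomain form is what gives gateway-served web apps proper origin isolation, which is part of why gateway changes matter to anyone whose service depends on one.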
![](../assets/ipfs-thing-2023-recap/ipfs-thing-1.png)
IPFS þing isn't just about getting things done though… it's also about doing things together. Because we were in the beating bureaucratic heart of the European Union, we had to ~~flout the rules~~ pay respect to the culture and history of the region by visiting the [Atomium](https://atomium.be/home/Index) for dinner on one night and the [Comics Art Museum](https://www.comicscenter.net/en/home) on another. We also held a game night featuring IPFS trivia that you will never be able to guess the answers to, but you may get a chance soon by following IPFS on [Twitter (@ipfs)](https://twitter.com/ipfs) or [Bluesky (@ipfs.tech)](https://staging.bsky.app/profile/ipfs.tech).
![](../assets/ipfs-thing-2023-recap/atomium-collage.jpg)
The closing session of IPFS þing was kicked off by a rousing call to action from [Danny O'Brien](https://twitter.com/mala) highlighting the importance of daily use of IPFS software within the community. This was followed by a group retrospective on the event itself, run by IPFS inventor Juan Benet, collecting feedback in real time from all attendees as input into the next one.
![](../assets/ipfs-thing-2023-recap/danny-juan-1.jpg)
The event would not have been possible without the dedication of our awesome track leaders, the 75+ speakers and sessions, the 5 IPFS þing Scholars who brought their unique perspectives and experiences to the event, and of course, everyone who traveled from over 30 countries to participate. Thank you to our incredible community for making IPFS þing 2023 an amazing experience, and see you next time!
Check out the full list of talks on the [IPFS YouTube channel](https://www.youtube.com/@IPFSbot/playlists). You can also head directly to each track's video playlist:
* [Opening Keynotes](https://www.youtube.com/watch?v=G2hlQqvjE-Y&list=PLuhRWgmPaHtRnO5G2EF0RxYebcQzLDf5F)
* [Measuring IPFS](https://www.youtube.com/watch?v=O8Nk1FN04Q8&list=PLuhRWgmPaHtQkkbiq-PbIkt9_S2NjJz6x)
* [IPFS Deployments & Operators](https://www.youtube.com/watch?v=bILa9sPpBMs&list=PLuhRWgmPaHtTYOY5l8nehP_Vt6Ek-svrp)
* [Data Transfer](https://www.youtube.com/watch?v=13_zr--akhs&list=PLuhRWgmPaHtS6WBDGK8oxcBHA6ILKatVk)
* [IPFS on the Web](https://www.youtube.com/watch?v=dn8PssXkRbY&list=PLuhRWgmPaHtQ-TO65P62tqfUM85HCIqSj)
* [Interplanetary Databases](https://www.youtube.com/watch?v=tjSuNmCTnyU&list=PLuhRWgmPaHtTO8hr2CYiJPTSe7wybW_op)
* [Content Routing](https://www.youtube.com/watch?v=oe7fjOl-q0s&list=PLuhRWgmPaHtRBWV3SvInC5ATS8aKV3lsW)
* [HTTP Gateways](https://www.youtube.com/watch?v=p89i9_AskIw&list=PLuhRWgmPaHtTapMgLW7rRh92Tk8u7wip5)
* [Decentralized Compute & AI](https://www.youtube.com/watch?v=LK9QjOJIPkQ&list=PLuhRWgmPaHtQ_lKtbTR-vIW1LYuTjcaPw)
* [Integrating IPFS](https://www.youtube.com/watch?v=drvFcbykHYY&list=PLuhRWgmPaHtTI0MS6ZjSJjBxZp7rcjSS_)
* [Community & Governance](https://www.youtube.com/watch?v=U2qvvQxIdws&list=PLuhRWgmPaHtTIFbOVO5YfXkoFg6wIGbBN)
Subscribe to the [IPFS Community Calendar](https://lu.ma/ipfs) to be the first to know about both online and in-person events, including pre-registration for our community-wide [IPFS Camp](https://lu.ma/ipfscamp23-prereg) in autumn 2023!
![](../assets/ipfs-thing-2023-recap/speaker-collage.jpg)
See you there! 🚀


@@ -0,0 +1,122 @@
---
title: 'Recap: IPFS on the Web (þing 2023)'
description: 'Track recap with links and serious analysis for the IPFS on the Web track at IPFS þing 2023'
author: Dietrich Ayala
date: 2023-05-10
permalink: '/2023-ipfs-thing-web-track/'
header_image: '/ipfs-thing-2023-recap/ipfs-on-the-web-featured-image-2.jpg'
tags:
- 'thing'
- 'þing'
- 'event'
- 'recap'
- 'track'
- 'web'
---
The world wide web is both the biggest deployment vector and least controllable surface for IPFS. There are opportunities and challenges with bringing IPFS support to rendering engines, browsers, gateway-served content, web apps, and browser extensions. It is these unique dynamics that motivated us to organize a dedicated content track for them at the annual gathering of IPFS implementers known as IPFS Thing.
To catch you up to speed, the [*Browsers and the Web Platform* track at IPFS Thing 2022](https://2022.ipfs-thing.io/schedule/#Browsers-and-The-Web-Platform) was only a half day long with a small group of people. It had browser and gateway progress updates, some alternate visions of how a content-addressed web could work on desktop and mobile, and also some straight-up "stuff is hard still" talks. You can listen to [all of these talks](https://www.youtube.com/watch?v=_DGVa2CJjIc&list=PLuhRWgmPaHtTsL76nt_A6CPDe6lW7l6Sz) on YouTube.
A few months later at [IPFS *Camp* 2022](https://2022.ipfs.camp/#Browsers-Platforms) we had a similar track with many more people and the tone changed a bit — we saw more working code, and even features shipped in products that were represented, and we covered platforms outside of web, like native mobile and space. You can watch the [full playlist of videos](https://www.youtube.com/watch?v=HhCHvuP5IJo&list=PLuhRWgmPaHtQohNbRjFJDS70WoElZ8ep5) on YouTube as well.
This brings us to IPFS *Thing* 2023, which took place last month. I had the privilege of broadening the lens even further than at the previous two gatherings. We examined the opportunities, challenges, products, protocols, and experiments at the intersection of these two distinct paradigms of HTTP and IPFS. This area of the IPFS ecosystem is changing so rapidly that [HTTP Gateways to the IPFS network](https://www.youtube.com/watch?v=p89i9_AskIw&list=PLuhRWgmPaHtTapMgLW7rRh92Tk8u7wip5) had a whole track to itself this year. This gave the Web track more room to move, so we were able to cover everything from naming systems to publishing pipelines to JS toolkits and more.
In the rest of this blog post you'll find highlights, links, and commentary from the track lead (that's me, Dietrich) on why these talks were selected and why I think they're helping make a better web for us all.
You can find the full video playlist for the IPFS on the Web track [here](https://www.youtube.com/watch?v=dn8PssXkRbY&list=PLuhRWgmPaHtQ-TO65P62tqfUM85HCIqSj), and I'll link each below as well.
## IPFS on the Web in 2023 (so far) - Dietrich Ayala
[Dietrich](https://metafluff.com/) gave a short overview of various initiatives and collaboration projects of the Browsers, Platforms & Standards team at Protocol Labs. It was a peek into the latest IPFS features in Brave Browser, early work into Chromium native support for IPFS, and various other work those weirdos are pushing forward.
@[youtube](dn8PssXkRbY)
## What Is The Web? - Robin Berjon
Good morning. Have you had a coffee yet? Ok great because you're going to need it for this talk. [Robin](https://berjon.com/) sets the perspective for the day by asking us one of the most difficult questions: "What is the web, actually?" It's big. It's special. We complain about it. But we need to have language to describe what it is in order to talk about how it could grow and change. Warning: This talk begins at a point in history over 100 years ago!
@[youtube](s878bm15mrk)
## A better web: secure, private, p2p apps with user-owned data and identity - Ian Preston
[Peergos](https://peergos.org/) has been building *your* private space online for half a decade, and it shows: [Ian](https://peergos.org/about#ian_) and team have built a mostly exfiltration-proof application platform on IPFS which works in the browsers of today. The idea of web content that can't phone home might make you say "hmmm", but it could just be the antidote to the surveillance-is-required-to-pay-the-bills version of the web we have today.
@[youtube](mSElk2jcFqY)
## WNFS: Versioned and Encrypted Data on IPFS - Philipp Krüger
It's the web. It's p2p. It's files. It's private. It's WNFS! [Philipp](https://irreactive.com/) walks us through how the WebNative Filesystem works, and how it works in browsers specifically. It's not easy, but none of us signed up for easy. That being said, you *can* sign up to join the [WNFS Working Group](https://github.com/wnfs-wg) today after watching this talk.
@[youtube](LBMyRp4Ywew)
## Content Based Addressing and the Web Security Model - Fabrice Desré
Speaking of hard... have you ever decided that the problem you'd like to fix in the world is Google and Apple's stranglehold on our daily digital lives? That's what [Fabrice](https://github.com/fabricedesre) does with [Capyloon](https://capyloon.org/), a complete web-based mobile operating system. When you control the OS, you ~~control the world~~ can do veeeerrrry interesting things. Fabrice gave us a deep dive into the [origin security model](https://www.rfc-editor.org/rfc/rfc6454) today and how radically different it can be in a content-addressed world.
@[youtube](H_1JVGDnctI)
## Hello Helia - achingbrain
Bye Felicia. Hello Helia. Thanks [achingbrain](https://github.com/achingbrain). Welcome to a new way to IPFS in JavaScript, on the web, or on the server... finally with nice things like DHT support. It should've been called banana. But we like it anyway.
@[youtube](T_FlhkLSgH8)
## JavaScript performance - how to wring the most out of your Helia deployment - achingbrain
Hey, it's [achingbrain](https://github.com/achingbrain) again, this time with a deep dive into Helia performance and optimizing for the environment you're deploying to. JavaScript is the *fastest* language for the environments it lives in, where other languages can't even exist, so take that.
@[youtube](zPeLYosZ3Ak)
## Connecting everything, everywhere, all at once with libp2p - Prithvi Shahi
First, the person who gives this talk is not Prithvi, it's Max. Second, Prithvi is a genius for thinking up a talk like this. Please enjoy all the transports everywhere all of the time.
@[youtube](zPeLYosZ3Ak)
## The Incredible Benefits of libp2p + HTTP - Marten Seemann & Marco Munizaga
In the ageless words of Gandalf, the headmaster of Hogwarts: "be like water". Or something like that. Anyways, if you're familiar with the challenge of writing code for the web that has to pretend that it's not actually stuck in a web browser tab, but instead is actually connected to a global transport-agnostic peer-to-peer network, then you'll understand how important it is to make friends with your environment and use the *!#? out of what it gives you. This talk from Marten and Marco shows you how the changing landscape of the network layer of the web platform is allowing libp2p to operate unfettered while still stuck in a tab in a window in a browser on your computer on earth.
@[youtube](Ixyo1G2tJZE)
## The Name Name Service - Blaine Cook
NNS.. NNS.. NNS... goes the beat. Now that you're bouncing your head, follow along as Blaine Cook shares his vision of how we solve one of the three hardest problems in computer science: how to make the perfect Hollandaise. Er, no that wasn't it. NAMING! Yes, that was it. The Name Name Service is a breathtakingly simple approach to flexible, verifiable, integrate-able, old-world-compatible, and human-readable names for our digital things. Thank you Blaine.
@[youtube](CHiCEd36KtI)
## Building decentralized websites on IPFS - Ryan Shahine
Decentralization is cool but it's so hard and you have to be super technical to even... wait, what? I can just... drag and drop? What I see is what I get?! With Portrait, yes. Ryan Shahine shares Portrait's slick and simple site builder for publishing your sites to IPFS. No-code sites for non-technical creators is such a wonderful thing to behold.
@[youtube](TeFAHmzvIdg)
## ODD.js, a technical overview - icidasset
Oddly enough, DID you know UCAN build decentralized applications with that WNFS stuff we heard about earlier? In this preview of the final emergent form of a bunch of Fission's tools coming together like Voltron, you'll meet Odd.js - a toolkit for building applications that has all of the core bits you need, from identity to naming to storage to security. The only odd thing is that we didn't have this yet.
@[youtube](ByQbY3lNAck)
## IPFS native frontend development using Importmaps - Dilip Shukla
Imagine if your CDN was a massive cooperative global distributed network that 1) wasn't a single company, and 2) pushed alllll the way up into your client-side build tooling, making your pages immediately available. WHAT IF YOUR DOM WAS ACTUALLY A DAG... ok, maybe that's too far, but what Lagom is doing is finding exactly where the right balance is. In the future we'll look back and wonder why we didn't do this from the beginning...
@[youtube](4HY_7DxScMo)
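As a rough illustration of the general idea (not code from the talk — the module name, CID, and gateway URL are hypothetical placeholders), a standard `<script type="importmap">` block lets client-side tooling resolve bare module specifiers to immutable, content-addressed URLs:

```html
<!-- Hypothetical example: resolve the bare specifier "my-lib" to a
     content-addressed module served over an IPFS gateway. -->
<script type="importmap">
{
  "imports": {
    "my-lib": "https://ipfs.io/ipfs/bafybeihypotheticalcid/my-lib.mjs"
  }
}
</script>
<script type="module">
  // The URL is immutable: same CID, same bytes, cacheable forever.
  import { greet } from 'my-lib'
</script>
```

Because the resolved URL is derived from the content itself, the browser (or any cache in between) can treat it as permanently cacheable.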
## Explorations into Decentralized Publishing - David Justice
The Browsers, Platforms and Standards team wants to share our thoughts early and often. And we want to do it on IPFS. There are all kinds of ways to do it, but each has its trade-offs... and it's not clear what those are until you go make a bunch of mistakes. So we're going to make them for you. David Justice is working on approaches for building our team blog with some friends at Trigram and shares what the first stab at it looks like.
@[youtube](fn5QNvRXMIo)
## Thank you!
Thanks to all the speakers for the day and also to the rad people who joined and asked great questions.
Until our next event, come hang out in our superbridged megachannel:
* #browsers-and-platforms on Filecoin Slack ([join](https://filecoin.io/slack))
* #browsers-and-standards on Element/Matrix ([join](https://matrix.to/#/#browsers-and-standards:ipfs.io))
* #browsers-and-standards on IPFS Discord ([join](https://discord.gg/ipfs))


@@ -0,0 +1,14 @@
---
title: Ecosystem content
type: Ecosystem content
sitemap:
exclude: true
data:
- title: 'libp2p at IPFS þing 2023 Recap'
date: 2023-05-11
publish_date:
card_image: /blog-post-placeholder.png
path: https://blog.libp2p.io/2023-libp2p-IPFS-Thing-recap/
tags:
- libp2p
---


@@ -0,0 +1,20 @@
---
title: Take the IPFS Events 2024 Survey Today!
description: 'We need your help! Make your voice heard about upcoming IPFS Events.'
date: 2023-09-18
tags:
- survey
- events
---
## **We need your help! Make your voice heard about upcoming IPFS Events 📆**
Each year, several different IPFS events and gatherings are held at different locations to make space for the community to socialize, learn, and grow together. Many of you have been eagerly awaiting more information about when the next events will be, and we're excited to begin sharing some of those details with you starting today!
**IPFS Camp** will take place in **Spring 2024**! IPFS Camp is a large in-person gathering for the entire IPFS community: devs, operators, implementers, researchers and you!
**IPFS Thing** is targeted for **Fall 2024**! IPFS Thing is a week-long gathering for the IPFS implementers community. Everything from talks, workshops, discussion circles, hacking time, and more — all focused on advancing IPFS implementations.
We are currently in the middle of sourcing venues for both events and would love to hear your feedback on where we should be looking and why. [Submit your feedback via the following survey](https://docs.google.com/forms/d/e/1FAIpQLScNP2NKgjVBu80IygfioeTH32aCMYASLBrlQ7q05ub3choHKQ/viewform) **by September 25 at 11:59pm PST** to make sure that your voice is heard!
<a href="https://docs.google.com/forms/d/e/1FAIpQLScNP2NKgjVBu80IygfioeTH32aCMYASLBrlQ7q05ub3choHKQ/viewform" class="cta-button">Fill out the survey</a>


@@ -0,0 +1,87 @@
---
title: Welcome to IPFS News 194!
description: Featuring Durin, Helia advancements, recaps of IPFS Thing 2023 from individual track leads, and much more!
author: ''
date: 2023-06-06
permalink: "/newsletter-194"
translationKey: ''
header_image: "/ipfsnews.png"
tags:
- newsletter
---
The month of May was packed full of new blog posts, þing 2023 recaps, and exciting milestones in the continued growth of IPFS. This edition of the newsletter covers everything from the launch of a new mobile app called Durin to a blog post explaining how to host dynamic content on IPFS using Helia.
Let's dive in…
## **Brand New on IPFS ✨**
[Announcing Durin: a New Mobile App for the IPFS Network](https://blog.ipfs.tech/announcing-durin/)
- Durin is a new experimental app designed to make IPFS more accessible on mobile devices. Whereas in the past interacting with IPFS on mobile was difficult, you can now read and share to IPFS from iPhones and Android devices. [Learn more and download here!](https://blog.ipfs.tech/announcing-durin/)
[How to Host Dynamic Content on IPFS](https://blog.ipfs.tech/2023-how-to-host-dynamic-content-on-ipfs/)
- The new JS implementation of IPFS called Helia is finally here and you can do lots of things with it (like connect to the DHT)! In a recent blog post, [@tabcat00](https://twitter.com/tabcat00) presented a way to host dynamic content on IPFS that utilizes Helia. [Check it out!](https://blog.ipfs.tech/2023-how-to-host-dynamic-content-on-ipfs/)
[IPFS Multi-Gateway Experiment in Chromium](https://blog.ipfs.tech/2023-05-multigateway-chromium-client/)
- John Turpish of Little Bear Labs goes over a new approach to implementing ipfs:// and ipns:// support natively in the browser, using a client-only approach and fetching verifiable responses from multiple HTTP gateways. [Dive in here](https://blog.ipfs.tech/2023-05-multigateway-chromium-client/)!
[js-IPFS deprecation / replaced by Helia](https://blog.ipfs.tech/202305-js-ipfs-deprecation-for-helia/)
- js-IPFS is in the process of being deprecated so you should port your apps to Helia to receive bug fixes, features, and performance improvements moving forwards. [Read more on the IPFS blog](https://blog.ipfs.tech/202305-js-ipfs-deprecation-for-helia/)!
[IPFS Network Measurement Reports](https://github.com/protocol/network-measurements/tree/master/reports/2023)
- If you're interested in IPFS network performance metrics and network cartography, make sure to check out ProbeLab's weekly reports! The reports are posted every Monday to the [network-measurements repository on GitHub](https://github.com/protocol/network-measurements/tree/master/reports/2023), with commentary and discussion happening on [the IPFS Discussion Forum](https://discuss.ipfs.tech/c/testing-and-experiments/35). Make sure to get involved and reach out through the discussion forum, the network-measurements repository (by opening an issue), or in the #probe-lab channel on the IPFS Discord or FIL Slack.
[What happens when half the network is down?](https://blog.ipfs.tech/2023-ipfs-unresponsive-nodes/)
- In 90% of networks, or networked systems, this is a grand-scale disaster... but for IPFS it's a very different story. Find out what happens in [a recent incident report published on the blog](https://blog.ipfs.tech/2023-ipfs-unresponsive-nodes/)!
[Introducing Rusty Lassie, a Rust wrapper for Lassie](https://crates.io/crates/lassie)
- A thin library embedding Lassie via CGo and FFI. With Rusty-Lassie, you can easily embed Lassie in your Rust project, start a Lassie HTTP server in a background thread, and retrieve CAR content using any HTTP client. [Learn more about the project here](https://crates.io/crates/lassie)!
## **IPFS Thing Track Recaps 📝**
[Recap: Content Routing (þing 2023)](https://blog.ipfs.tech/2023-ipfs-thing-content-routing-track/)
[Recap: Community & Governance (þing 2023)](https://blog.ipfs.tech/2023-ipfs-community-governance/)
[Recap: HTTP Gateways (þing 2023)](https://blog.ipfs.tech/2023-http-gateways-recap/)
[Recap: IPFS on the Web (þing 2023)](https://blog.ipfs.tech/2023-ipfs-thing-web-track/)
[libp2p at IPFS þing 2023 Recap](https://blog.libp2p.io/2023-libp2p-IPFS-Thing-recap/)
## **Around the Ecosystem 🌎**
[Founders Series, Episode 11: Juan Benet of Protocol Labs [Video]](https://www.youtube.com/watch?v=r-nU_MI2lV4)
- In this talk from LabWeek22 last November in Lisbon, Juan explains the importance of R&D, the lack of funding it receives, and how he hopes to solve this problem with the Protocol Labs Network, an ecosystem of teams based on open source, working together to bridge what he calls the innovation chasm — the separation between research and the deployment of products. [Watch it on YouTube!](https://www.youtube.com/watch?v=r-nU_MI2lV4)
[Filecoin & IPFS Ecosystem Roundup [Video]](https://youtu.be/kXnSklUL5NE)
- In this revamped monthly public video, we give builders and community members a platform to share how they're making web3 work better for all of us. Please [fill out this form](https://airtable.com/shrcadO9WAnQ5nJvA) to nominate a team/project to be featured as a 'Win of the Month'! Join us live the first Thursday of every month, and [watch the May roundup now!](https://www.youtube.com/watch?v=kXnSklUL5NE)
[IPNS on Lighthouse](https://twitter.com/nanditmehra/status/1664317411313733634?s=20)
- IPNS support is now live on Lighthouse. Now build creative dapps with the world's best p2p tech for mutable data. Edit and upload your data and build dynamic NFT collections, mutable file systems, and much more with this IPNS support. [See the announcement on Twitter!](https://twitter.com/nanditmehra/status/1664317411313733634?s=20)
[IPFS Open Metaverse Base Camp Accelerator](https://twitter.com/OVioHQ/status/1662062713550299136?s=20)
- We're thrilled to announce the teams making up the latest IPFS Open Metaverse Base Camp accelerator cohort. This 12-week program will accelerate teams leveraging IPFS, Filecoin & [@fvmdev](https://twitter.com/fvmdev), paving the way forwards in the open data economy. [Read all about it in this Twitter thread!](https://twitter.com/OVioHQ/status/1662062713550299136?s=20)
[Filebase for Startups](https://filebase.com/startups/)
- Filebase now has a program that offers complimentary IPFS storage and dedicated gateways for startups to scale with. You can learn more about it [on their website](https://filebase.com/startups/).
[Protocol Labs Launch Pad](https://protocol.ai/blog/launchpad-summit-paris-2023/)
- Launchpad is a blend of two key components: a dynamic four-week virtual learning cohort, where residents actively participate in remote learning seminars, and an unforgettable one-week in-person “colo” Summit. [Learn more on the Protocol Labs blog](https://protocol.ai/blog/launchpad-summit-paris-2023/)!
[HackFS kicked off on June 2](https://ethglobal.com/events/hackfs2023)
- Late last week, EthGlobal and Protocol Labs kicked off HackFS 2023 with an incredible summit featuring fireside chats on FVM, presentations on the Protocol Labs builders funnel, and even a talk from a surprise guest. [Check out the event's website to catch up](https://ethglobal.com/events/hackfs2023)!


@@ -4,6 +4,14 @@ type: News coverage
sitemap:
exclude: true
data:
- title: Brave announces automatic NFT backups and enhanced IPFS/Filecoin support in Brave Wallet
date: 2023-05-02
publish_date:
path: https://brave.com/nft-pinning/
tags:
- NFTs
- Brave
- pinning
- title: WebTransport in libp2p
date: 2022-12-19
publish_date:


@@ -0,0 +1,96 @@
---
title: Welcome to IPFS News 195!
description: Featuring a deep-dive into the challenges of measuring decentralized networks + more!
author: ''
date: 2023-07-06
permalink: "/newsletter-195"
translationKey: ''
header_image: "/ipfsnews.png"
tags:
- newsletter
---
As we enter the summer months, things are slowing down just a bit as people go on holiday or get some much-needed R&R, but that doesn't mean we don't have plenty of things to share with you! From a deep-dive into [the challenges of measuring decentralized networks](https://pulse.internetsociety.org/blog/the-challenges-of-measuring-decentralized-networks-the-case-of-the-interplanetary-file-system) to news about Fission adding redirect support for IPFS, this month's newsletter will keep you in the loop whether this edition finds you in the office, at home, or on the beach. 🏖️
Let's jump in!
## **Brand New on IPFS ✨**
[Kubo 0.21.0](https://github.com/ipfs/kubo/releases/tag/v0.21.0)
- Saving previously seen nodes for later bootstrapping
- Gateway: `DeserializedResponses` config flag
- `client/rpc` migration of `go-ipfs-http-client`
- Gateway: DAG-CBOR/-JSON previews and improved error pages
- Gateway: subdomain redirects are now `text/html`
- Gateway: support for partial CAR export parameters (IPIP-402)
- `ipfs dag stat` deduping statistics
- Accelerated DHT Client is no longer experimental
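As a sketch of one of those items: the `DeserializedResponses` flag lives under the gateway section of Kubo's configuration file. The field name comes from the release notes above; the surrounding JSON structure is my reading of Kubo's config layout, so double-check the Kubo docs:

```json
{
  "Gateway": {
    "DeserializedResponses": false
  }
}
```

It can also be set with `ipfs config --json Gateway.DeserializedResponses false`. As I understand it, disabling deserialized responses limits the gateway to verifiable response types (raw blocks and CARs), which suits trustless gateway deployments.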
[The Challenges of Measuring Decentralized Networks: The Case of the InterPlanetary File System](https://pulse.internetsociety.org/blog/the-challenges-of-measuring-decentralized-networks-the-case-of-the-interplanetary-file-system)
- In this new blog post from the Internet Society's (ISOC) Pulse project, Yiannis Psaras of ProbeLab shares their experience measuring the stability, performance, and cartography of InterPlanetary File System (IPFS), one of the largest decentralized, P2P networks in operation. [Read it here!](https://pulse.internetsociety.org/blog/the-challenges-of-measuring-decentralized-networks-the-case-of-the-interplanetary-file-system)
[June Protocol Labs EngRes All Hands [Video]](https://www.youtube.com/watch?v=7fbhniQJjDw)
- The PL Engineering and Research (EngRes) Workgroup is formed by teams of core protocol developers, network researchers, and experienced contributors in the Protocol Labs Network. The PL EngRes WG mission is to scale and unlock new opportunities for IPFS, Filecoin, libp2p + IPLD, drive breakthroughs in protocol utility & capability, and scale network-native research & development across the PL Network. The PL EngRes WG hosts monthly all hands meetings to check in on progress, and showcase the growth and new capabilities being unlocked by these research & development teams. [Watch it here!](https://www.youtube.com/watch?v=7fbhniQJjDw)
[js-IPFS Deprecation Reminder: Move to Helia!](https://blog.ipfs.tech/202305-js-ipfs-deprecation-for-helia/)
- **js-IPFS is in the process of being deprecated** so you should port your apps to Helia to receive bug fixes, features, and performance improvements moving forwards. [Read more about it here!](https://blog.ipfs.tech/202305-js-ipfs-deprecation-for-helia/)
## **Around the Ecosystem 🌎**
[Fission adds redirect support for IPFS](https://fission.codes/blog/introducing-redirect-support-for-ipfs/)
- Fission is dedicated to building and improving on decentralized web protocols. Redirect support is an officially accepted improvement for IPFS that makes it easier to host modern web applications. [Learn more about it here!](https://fission.codes/blog/introducing-redirect-support-for-ipfs/)
[IPFS data integrated directly into a blockchain explorer](https://twitter.com/al_koii/status/1665817302279880706?s=20)
- Koii's distributed computing platform uses [web3.storage](https://t.co/KyqdyMsAQy) as a convenient integration for IPFS. Thanks to the lightning-fast w3link gateway and easy to use SDK, developers building on Koii have an extra edge as they implement P2P apps, distributed AI, and more. [Check it out in this Twitter thread!](https://twitter.com/al_koii/status/1665817302279880706?s=20)
[Elevate by Outlier Ventures](https://outlierventures.io/elevate/)
- Elevate is a virtual event series focused on spotlighting Outlier Ventures' Base Camp teams. Featuring talks from the very people driving Web3 forward, ELEVATE gives their partners, mentors, program experts, and founders themselves an opportunity to showcase the progress they've made through their 12-week program. At the virtual event on July 6 (today!) you'll be able to meet the builders onboarding the next billion users into the Open Metaverse, using IPFS. [Join the event here!](https://outlierventures.io/elevate/)
[ProbeLab Office Hours: IPFS Network Measurements](https://lu.ma/ipfs-network-measurements)
- These open office hours are for anyone interested in network measurements in the IPFS network. The session is hosted by the [ProbeLab](https://blog.ipfs.io/2022-06-15-probelab/) team. During this session, they will discuss issues related to ongoing projects and IPFS network measurement topics more generally with the community. If you're working or are interested in contributing, [make sure to join!](https://lu.ma/ipfs-network-measurements)
[IPFS Multi-Gateway Experiment in Chromium](https://blog.ipfs.tech/2023-05-multigateway-chromium-client/?utm_content=253765483&utm_medium=social&utm_source=twitter&hss_channel=tw-3030006159)
- Learn about a new approach to implementing ipfs:// and ipns:// support natively in the browser, using a client-only approach and fetching verifiable responses from multiple HTTP gateways. [Check out the blog post!](https://blog.ipfs.tech/2023-05-multigateway-chromium-client/?utm_content=253765483&utm_medium=social&utm_source=twitter&hss_channel=tw-3030006159)
[Secure Curves in the Web Cryptography API](https://blogs.igalia.com/jfernandez/2023/06/20/secure-curves-in-the-web-cryptography-api/)
- A new blog post about the collaboration between [@igalia](https://twitter.com/igalia) and [@protocollabs](https://twitter.com/protocollabs) on the implementation of secure curves based on Curve25519 for the Web Cryptography specification. [Read it now!](https://blogs.igalia.com/jfernandez/2023/06/20/secure-curves-in-the-web-cryptography-api/)
[Where to find the Filecoin Community at EthCC](https://fil-paris.io/)
- Looking for the Filecoin community during EthCC? Check out [Filecoin Unleashed](https://filecoinunleashed.io) and [Fil Paris](https://fil-paris.io).
[Accelerate your Web3 journey: Protocol Labs Launchpad Summit on July 16-21](https://protocol.ai/blog/launchpad-summit-paris-2023/)
- Launchpad is a blend of two key components: a dynamic four-week virtual learning cohort, where residents actively participate in remote learning seminars, and an unforgettable one-week in-person “colo” Summit. **[Learn more on the Protocol Labs blog](https://protocol.ai/blog/launchpad-summit-paris-2023/)**
## HackFS Winners 🏅
This year's [HackFS hackathon](https://ethglobal.com/events/hackfs2023) has come to a close, and several projects were selected as winners for the IPFS category. If you missed it, [learn more about HackFS here](https://ethglobal.com/events/hackfs2023).
Introducing the HackFS 2023 winners for IPFS…
[Web3Stash](https://ethglobal.com/showcase/web3stash-mn6iu) by [@mbcse50](https://twitter.com/mbcse50)
- Web3Stash is a library that provides a single API for connecting to multiple decentralized storage providers. [Check it out here!](https://ethglobal.com/showcase/web3stash-mn6iu)
[unid.store](https://t.co/xbh9zYbjm9) by [@_Difint_](https://twitter.com/_Difint_) and [@mr13tech](https://twitter.com/mr13tech)
- Super simple file sharing - decentralized, quick, and without registration. [Take a look here!](https://ethglobal.com/showcase/unid-store-2yukr)
[Fileblox](https://ethglobal.com/showcase/fileblox-y0rjm) by [@Lycaoncreatives](https://twitter.com/LycaonCreatives), [@raldblox](https://twitter.com/raldblox), and [@luckscientist](https://twitter.com/luckscientist)
- FileBlox enables the creation of encrypted NFTs. It solves the right-click-and-save problem for our content creators while letting them get all the benefits of tokenization. [Learn more here!](https://ethglobal.com/showcase/fileblox-y0rjm)
[Star Streamer](https://ethglobal.com/showcase/star-streamer-huakw) by [@msakiart](https://twitter.com/msakiart)
- A P2P video streaming service for decentralized content sharing, built with libp2p, IPFS, and Hypercore. [Check it out here!](https://ethglobal.com/showcase/star-streamer-huakw)


@@ -0,0 +1,73 @@
---
title: Welcome to IPFS News 196!
description: Featuring news about a resilient and fully-automated infrastructure to monitor the performance of the IPFS network.
author: ''
date: 2023-08-09
permalink: "/newsletter-196"
translationKey: ''
header_image: "/ipfsnews.png"
tags:
- newsletter
---
## **An Observatory for the IPFS Network 🔭**
We're excited to share that the ProbeLab team has worked hard over the past year to build a resilient and fully-automated infrastructure to monitor the performance of core IPFS stack protocols. The debut of this new measurement platform is big news, and you can learn all about it in a new post on the IPFS blog.
<a href="https://blog.ipfs.tech/2023-ipfs-observatory/" class="cta-button">Read the blog post</a>
![](../assets/probelab.png)
## **Brand New on IPFS ✨**
[A Rusty Bootstrapper](https://blog.ipfs.tech/2023-rust-libp2p-based-ipfs-bootstrap-node/)
- As of July 13, 2023, one of the four "public good" IPFS bootstrap nodes operated by Protocol Labs has been running rust-libp2p-server instead of Kubo, which uses go-libp2p. rust-libp2p-server is a thin wrapper around rust-libp2p. We run both Kubo and rust-libp2p-server on IPFS bootstrap nodes to increase resilience. [Read more about it on the IPFS blog!](https://blog.ipfs.tech/2023-rust-libp2p-based-ipfs-bootstrap-node/)
[Dogfooding Announcement: IPFS-Companion Manifest v3 Changes](https://discuss.ipfs.tech/t/announcing-ipfs-companion-mv3-rc-beta/16442/7)
- The PL EngRes Ignite team has achieved a significant milestone: the completion of the IPFS-Companion Manifest v3 changes! IPFS-Companion is a browser extension that makes browsing the IPFS web simpler. These changes promise to greatly enhance compatibility with browsers going forward and offer performance improvements. [Read about it and get involved here!](https://discuss.ipfs.tech/t/announcing-ipfs-companion-mv3-rc-beta/16442/7)
[IPFS Events Planning Meeting](https://lu.ma/ipfseventsplanning)
- The events team is kicking off a new Events Planning Call today. If you're interested in joining or participating, you can find these meetings on the [IPFS Community Calendar](https://lu.ma/ipfs), or you can register directly at [lu.ma/ipfseventsplanning](https://lu.ma/ipfseventsplanning). Today's agenda will be to discuss timing for IPFS Camp and Thing for 2024. Timing will affect the locations that make the shortlist. [Join us here!](https://lu.ma/ipfseventsplanning)
[Boxo v0.11.0](https://github.com/ipfs/boxo/blob/release-v0.11.0/CHANGELOG.md)
## **Around the Ecosystem 🌎**
[Guide: Setting Up a Website on the Distributed Web using Distributed Press](https://medium.com/@lindsay_walker/setting-up-a-website-on-the-distributed-web-7eae22594303)
- Distributed Press is a tool used to easily host content on distributed, peer-to-peer protocols such as IPFS and Hypercore, using open source tools created by the Distributed Press project. Publishing a static site on distributed protocols means that your website is more resilient and likely to stand the test of time. [Learn how to do it here!](https://medium.com/@lindsay_walker/setting-up-a-website-on-the-distributed-web-7eae22594303)
[Anytype: A private hub for all your data](https://anytype.io/)
- Meet Anytype, a private hub for all your data: docs, tasks, files, bookmarks, contacts and more. It's built on a new architecture that protects your privacy and data sovereignty, even when working across devices. Use it to create elegant dashboards, documents, and knowledge graphs. [Try it out here!](https://anytype.io/)
[Fleek Network announces new edge platform](https://twitter.com/fleek_net/status/1685997861907890176)
- Fleek Network's new platform utilizes IPFS/IPLD as the addressability and performance layer of data on the network. [Learn more here!](https://twitter.com/fleek_net/status/1685997861907890176)
[Admarus: A Peer-to-Peer Search Engine for IPFS](https://blog.admarus.net/blog/mvp-release/)
- A decentralized search engine for the decentralized web (specifically, IPFS). [Check it out here!](https://blog.admarus.net/blog/mvp-release/)
[Beloga: A Decentralized Blogging Platform](https://discuss.ipfs.tech/t/beloga-decentralized-blogging-platform-powered-by-ipfs/16727)
- This new blogging platform has IPFS at its core with posts being securely stored and decentralized, making them tamper-proof and censorship-resistant. [See it for yourself here!](https://discuss.ipfs.tech/t/beloga-decentralized-blogging-platform-powered-by-ipfs/16727)
[Filebase introduces custom domain support for dedicated IPFS gateways](https://filebase.com/blog/introducing-custom-domain-support-for-dedicated-ipfs-gateways/)
- With the introduction of custom domain support, users can now attach their domain names to their dedicated gateways, bolstering their brand consistency and accessibility. [Learn more about it in this blog post announcement!](https://filebase.com/blog/introducing-custom-domain-support-for-dedicated-ipfs-gateways/)
[IPFSnodes.com](https://ipfsnodes.com/)
- A community-created dashboard with lots of data and information about the IPFS network and its nodes. [Take a look at it here!](https://ipfsnodes.com/)
[Open Data Hackathon](https://www.encode.club/open-data-hack)
- This upcoming hackathon features a $1,000 IPFS bounty. [Learn more and get involved here!](https://www.encode.club/open-data-hack)
[The Evolution of Filecoin and IPFS: An Overview of Challenges and Future Opportunities](https://medium.com/filemarket-xyz/the-evolution-of-filecoin-and-ipfs-an-overview-of-challenges-and-future-opportunities-795ce237c4b6)
- A new article on FileMarket about the evolution of Filecoin and IPFS that is based on an AMA with Juan Benet during Filecoin Unleashed Paris 2023. Read through it here!


@@ -0,0 +1,52 @@
---
title: Welcome to IPFS News 197!
description: Featuring an announcement about the new Ecosystem Working Group and Kubo v0.22.0
author: ''
date: 2023-09-12
permalink: "/newsletter-197"
translationKey: ''
header_image: "/ipfsnews.png"
tags:
- newsletter
---
## **Introducing the Ecosystem Working Group 🔭**
Since its initial release over 9 years ago, IPFS has been stewarded by a variety of teams and individual contributors, both within and outside of Protocol Labs. More recently though, it has lacked a dedicated team focused on nothing other than the success of the IPFS ecosystem. It is with this reality in mind that we are excited to announce the formation of **[the brand new IPFS Ecosystem Working Group](https://blog.ipfs.tech/2023-introducing-the-ecosystem-working-group/)**!
<a href="https://blog.ipfs.tech/2023-introducing-the-ecosystem-working-group/" class="cta-button">Read the blog post</a>
## **Brand New on IPFS ✨**
[Kubo v0.22.0](https://github.com/ipfs/kubo/releases/tag/v0.22.0)
- Gateway: support for order= and dups= parameters (IPIP-412)
- ipfs name publish now supports V2-only IPNS records
- IPNS name resolution has been fixed
- go-libp2p v0.29.0 update with smart dialing
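The new `order=` and `dups=` gateway parameters from IPIP-412 are content-type parameters on CAR responses: `order` controls block ordering (e.g. depth-first) and `dups` controls whether duplicate blocks may appear in the stream. A small sketch of how a client might request them, where the gateway host and CID are placeholders, not taken from the release notes:

```python
import urllib.request

cid = "bafyexamplecid"  # placeholder CID, for illustration only

# Ask the gateway for a CAR export with deterministic depth-first block
# ordering and no duplicate blocks, per the IPIP-412 content-type parameters.
req = urllib.request.Request(
    f"https://ipfs.io/ipfs/{cid}",
    headers={"Accept": "application/vnd.ipld.car; order=dfs; dups=n"},
)

# Sending the request (network access required) would stream the CAR bytes:
#   with urllib.request.urlopen(req) as resp:
#       car_bytes = resp.read()
```

Deterministic ordering and de-duplication make the response byte-identical across conforming gateways, which is useful for caching and verification.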
[IPFSConnect Istanbul](https://istanbul2023.ipfsconnect.org/)
- Have you checked out [IPFSConnect](https://twitter.com/IPFSConnect) yet? It's a community-run meetup of developers and designers building on top of IPFS. Join us in Istanbul on Nov 16th for a day of workshops + talks! [Learn more here!](https://istanbul2023.ipfsconnect.org/)
## **Around the Ecosystem 🌎**
[LabWeek23 is happening November 13-17](https://23.labweek.io/)
- Have you booked your travel yet? [LabWeek23](https://23.labweek.io/) is happening in Istanbul, Türkiye, from November 13-17, alongside Devconnect! This is your chance to connect and collaborate with visionaries and teams that are domain leaders in ZK Proofs, AI and blockchain, DeSci, decentralized storage, gaming in Web3, public goods funding, cryptoeconomics, and much more. [Learn more about it here!](https://23.labweek.io/)
[A Beginner's Guide to IPFS Content Addressing](https://filebase.com/blog/a-beginners-guide-to-ipfs-content-addressing/)
- Learn how to harness the power of the InterPlanetary File System for seamless content distribution by checking out this comprehensive guide to IPFS content addressing by Filebase. [Read it here!](https://filebase.com/blog/a-beginners-guide-to-ipfs-content-addressing/)
[Fleek's new app is in closed alpha](https://blog.fleek.xyz/post/fleekxyz-alpha-release/)
- "The day is finally herethe first step of the new Fleek, both app and brand ⚡ Lets set the stage: This is not the full release of the new Fleek app. Today marks the start of our initial closed testing phase, leading up to an open testing period, and later in September, the full v1 release of the new app." [Learn more in this blog post!](https://blog.fleek.xyz/post/fleekxyz-alpha-release/)
[New git-ipfs remote bridge](https://twitter.com/momack28/status/1697072752266706979?s=20)
- "Love git and IPFS? There's a new git-ipfs remote bridge that lets you snapshot new git releases to IPFS for self-hosting, immutable versioning, and decentralized replication. Go InterPlanetary!" [Check it out here!](https://github.com/ElettraSciComp/Git-IPFS-Remote-Bridge)
[Encrypted file support will be added to Cedalio soon](https://medium.com/@cedalio/product-update-uploading-files-has-never-been-easier-7b328def728a)
- "Introducing the ability to define file types within your GraphQL schema, while we handle the rest. To stay in sync with our company core values, we store files in IPFS, a decentralized peer-to-peer protocol for storing and sharing files across a distributed network. As of now, files stored in IPFS are not encrypted; however, were excited to announce that support for encryption will be added in early October." [Learn more via their product update!](https://medium.com/@cedalio/product-update-uploading-files-has-never-been-easier-7b328def728a)


@@ -1,5 +1,40 @@
---
data:
- title: 'Just released: Kubo 0.22.0!'
date: "2023-08-08"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.22.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.21.0!'
date: "2023-07-03"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.21.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.20.0!'
date: "2023-05-09"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.20.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.19.2!'
date: "2023-05-03"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.19.2
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.19.1!'
date: "2023-04-05"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.19.1
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.19.0!'
date: "2023-03-20"
publish_date: null


@@ -0,0 +1,72 @@
---
title: Welcome to IPFS News 192!
description: Learn about IPFS Thing 2023, Community Impact Awards, and much more in
this month's round-up of IPFS news.
author: ''
date: 2023-03-27
permalink: "/newsletter-192"
translationKey: ''
header_image: "/ipfsnews.png"
tags:
- weekly
---
## **Register for IPFS Thing 2023 🎟️**
We're only a few weeks away from this year's week-long gathering for the IPFS implementers community. [IPFS Thing 2023](https://2023.ipfs-thing.io/) is happening from **April 15-19 in Brussels, Belgium** and will include everything from talks, workshops, discussion circles, hacking time, and more — all focused on advancing IPFS implementations.
[**Grab your tickets today before time runs out!**](https://2023.ipfs-thing.io/) Use code “THING23” at checkout by March 31 to get them for only $299.
For those attending, you can also [submit a talk or track online.](https://2023.ipfs-thing.io/submit/)
## **Vote in the IPFS Community Impact Awards 🏅**
The next round of IPFS Community Impact Awards will be awarded very soon — in April 2023. In this upcoming round, we'd like to invite the broader community to vote! If any of these are true, you're eligible to vote and help recognize the most valuable projects that are advancing the IPFS project and community!
* You attended IPFS þing 2022, or
* You contributed to the Kubo, Helia, Nabu, or Iroh repos prior to 2022.03.22, or
* You are building a project within the [**IPFS Fund Scope**](https://www.youtube.com/watch?v=YfpnGPYddK8&t=772s)
If you meet any of the above criteria, please [submit this eligibility form](https://airtable.com/shrXvfDLEoYjFGWV9) and look out for a confirmation. For questions, please email [**impact-evaluator@protocol.ai**](mailto:impact-evaluator@protocol.ai).
## **Brand New on IPFS ✨**
[Kubo v0.19.0](https://github.com/ipfs/kubo/releases/tag/v0.19.0)
* Improving the libp2p resource management integration
* Gateways
* Signed IPNS Record response format
* Example fetch and inspect IPNS record
* Addition of "autoclient" router type
* Deprecation of the `ipfs pubsub` commands and matching HTTP endpoints
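The signed IPNS record response format in the Kubo 0.19.0 notes means a client can fetch the raw, cryptographically signed record straight from a gateway and verify it offline. A minimal sketch, where the gateway host and IPNS name are placeholders and the `Accept` value is assumed from the gateway specs:

```python
import urllib.request

name = "k51-example-ipns-name"  # placeholder IPNS name, for illustration only

# Request the signed IPNS record itself rather than the resolved content.
req = urllib.request.Request(
    f"https://ipfs.io/ipns/{name}",
    headers={"Accept": "application/vnd.ipfs.ipns-record"},
)

# Sending the request (network access required) returns the protobuf-encoded
# record, whose signature can then be verified without trusting the gateway:
#   with urllib.request.urlopen(req) as resp:
#       open("record.bin", "wb").write(resp.read())
# Kubo can decode a saved record with, e.g., `ipfs name inspect < record.bin`.
```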
[Cluster v1.0.6](https://github.com/ipfs-cluster/ipfs-cluster/releases/tag/v1.0.6)
* IPFS Cluster v1.0.6 is a maintenance release with some small fixes. The main change in this release is that pebble becomes the default datastore backend.
[Brand new 3-part intro to IPFS in the docs](https://docs.ipfs.tech/concepts/what-is-ipfs/)
* As part of our ongoing efforts to make the IPFS docs even better, we've created a new section that does a better job at covering the basics. [Check it out for yourself!](https://docs.ipfs.tech/concepts/what-is-ipfs/)
[Helia Demo Day recordings](http://ipfs.fyi/helia-demo)
* If you missed the live events about Helia (the new implementation of IPFS in JavaScript), then you can [catch up](http://ipfs.fyi/helia-demo) thanks to these handy videos!
[IPLD: The Data Layer of the Decentralized Web](https://blog.ipfs.tech/ipld-the-new-data/)
* A recently published post on the IPFS blog goes in-depth into how IPLD does more than just provide a universal way to move data to where users want; it also enables all future applications to be built on top of every prior application's data. [Read it now](https://blog.ipfs.tech/ipld-the-new-data/)!
[New IPFS website coming soon](https://twitter.com/IPFS/status/1638225746010185760?s=20)
* We recently posted a teaser on Twitter for the upcoming ipfs.io website redesign. We've been hard at work revamping the entire IPFS website, and we're excited for it to go live in the near future — stay tuned!
## **Around the Ecosystem 🌎**
* [The IPFS community was at GDC last week](https://twitter.com/IPFS/status/1638253848711032851?s=20) with [3SGame Studio](https://www.3studio.online), makers of the [IPFS plugin for Unreal Engine](https://www.unrealengine.com/marketplace/en-US/product/ipfs). The plugin speeds up game loading by up to 40x.
* Want to see how others are scaling IPFS? [Here's how NearForm designed Elastic IPFS](https://www.nearform.com/blog/designing-cloud-based-architecture-with-infinite-scalability-elastic-ipfs-provider/), a cloud-native implementation of IPFS with infinite horizontal scalability that powers [NFT.Storage](https://nft.storage) and [Web3.Storage](http://web3.storage).
* [Check out this great Twitter thread from @jalil_eth](https://twitter.com/jalil_eth/status/1628176052764942338?s=20) that explains how IPFS addresses work and why they are so important for NFTs! With real-world examples!
* We love it when people host their websites using IPFS, and [this helpful guide by @juanbeencoding from Fleek](https://blog.fleek.xyz/post/hosting-on-ipfs-best-practices-troubleshooting/) covers some excellent best practices for how to do it effectively.
* [Learn how to throw your ebook library at IPFS](https://dustri.org/b/how-to-throw-your-ebook-library-at-ipfs.html) in this recently released tutorial.
* [Academic research on implementing Swarm's Alpha Entanglement in IPFS](https://twitter.com/IPFS/status/1633367698724798464?s=20) using IPFS Cluster to increase reliability.
* [Fireproof](https://twitter.com/FireproofStorge) is a new dynamic database product created by [@jchris](https://twitter.com/jchris) that has IPFS at its core. Use it to quickly add dynamic data to any app or page.
* [A new tool](https://nouns.build) from Zora called Nouns Builder enables anyone to create a Nouns-style DAO without any code. The best part? It utilizes the power of IPFS. The next generation of DAOs will have a solid data foundation.


@@ -0,0 +1,78 @@
---
title: Welcome to IPFS News 193!
description: Featuring Bluesky, a recap of IPFS Thing 2023, Brave's enhanced IPFS support, content blocking in Kubo, and much more!
author: ''
date: 2023-05-09
permalink: "/newsletter-193"
translationKey: ''
header_image: "/ipfsnews.png"
tags:
- newsletter
---
A lot has happened since the previous newsletter over a month ago. [IPFS Thing took place in Brussels](https://blog.ipfs.tech/2023-ipfs-thing-recap/), we created a [Bluesky](https://blog.ipfs.tech/2023-ipfs-on-bluesky/) account, [Brave released automatic NFT backups to IPFS](https://brave.com/nft-pinning/), [content blocking can now be enabled in Kubo](https://blog.ipfs.tech/2023-content-blocking-for-the-ipfs-stack/), plus so much more! Read on to catch up with what's happened in the ecosystem over the last few weeks.
## **Recap: IPFS Thing 2023 🔄**
The IPFS implementers community recently gathered in Brussels, Belgium for the second year of [IPFS þing](https://2023.ipfs-thing.io/), an annual gathering dedicated to advancing IPFS implementation. With 12 tracks and over 75 talks, demos, and sessions, the 5-day summit that occurred in April 2023 was a showcase of recent advances across IPFS, a forum for sharing needs from the protocol, and an opportunity to chart new directions for the future of IPFS.
[Read the recap on the blog for photos, videos, and summaries!](https://blog.ipfs.tech/2023-ipfs-thing-recap/)
## **Brand New on IPFS ✨**
**[IPFS is now on Bluesky!](https://blog.ipfs.tech/2023-ipfs-on-bluesky/)**
* We're excited to share that IPFS now has an official presence on [Bluesky](https://blueskyweb.xyz/)! We chose [Bluesky](https://twitter.com/bluesky) because it shares many of the same values and goals that the IPFS ecosystem has. Additionally, they actively utilize IPLD and content addressing. [Read more about it](https://blog.ipfs.tech/2023-ipfs-on-bluesky/)!
**[Content Blocking for the IPFS stack is finally here!](https://blog.ipfs.tech/2023-content-blocking-for-the-ipfs-stack/)**
* Traditionally, content blocking within the IPFS ecosystem has been performed only at the IPFS gateway level and directly in Nginx, using something called the "Badbits denylist" — but now it can be enabled in Kubo & other tools in the IPFS stack too! [Check out the blog post for more info.](https://blog.ipfs.tech/2023-content-blocking-for-the-ipfs-stack/)
**[What happens when half of the network is down?](https://blog.ipfs.tech/2023-ipfs-unresponsive-nodes/)**
* The IPFS DHT experienced a serious incident at the beginning of 2023, but users hardly noticed thanks to the power of a decentralized network. [Read all about it in a new incident report!](https://blog.ipfs.tech/2023-ipfs-unresponsive-nodes/)
**[IPFS Principles](https://specs.ipfs.tech/architecture/principles/)**
* As mentioned above, IPFS recently joined a new social media network called [Bluesky](https://blueskyweb.xyz/) because it shares many of the same values that the IPFS ecosystem has. But what are those values exactly? You can [read all about IPFS Principles in a new specs doc](https://specs.ipfs.tech/architecture/principles/) edited by[ Robin Berjon](https://twitter.com/robinberjon).
**[Kubo 0.20.0](https://github.com/ipfs/kubo/releases/tag/v0.20.0)**
* This update includes:
* Switch to `boxo/gateway` library
* Improved testing
* Trace Context support
* Removed legacy features
**[Kubo 0.19.2](https://github.com/ipfs/kubo/releases/tag/v0.19.2)**
**[Kubo 0.19.1](https://github.com/ipfs/kubo/releases/tag/v0.19.1)**
## **Around the Ecosystem 🌎**
* [Brave announces automatic NFT backups and enhanced Filecoin support in Brave Wallet](https://brave.com/nft-pinning/)
* We're excited to share that the latest version of Brave's web browser introduces automatic NFT backups to IPFS. Brave Wallet users can avoid the permanent loss of NFT metadata and gain peace of mind thanks to this new feature. [Check it out!](https://brave.com/nft-pinning/)
* [Introducing Lassie - a retrieval client for IPFS and Filecoin](https://blog.ipfs.tech/2023-introducing-lassie/)
* Lassie makes it easy to fetch your data from both the IPFS and Filecoin Network - it will find and fetch content over the best retrieval protocols available. [Read more about it on the IPFS blog](https://blog.ipfs.tech/2023-introducing-lassie/)!
* [IPFS Implementations: Its Definitely A Thing](https://blog.ipfs.tech/2023-03-implementation-principles/)
* In a new blog post,[ Robin Berjon](https://twitter.com/robinberjon) talks about how the world of IPFS implementations has diversified greatly over the past 9 months: “Springtime in the distributed hemisphere and we are frolicking across fields of tantalizing IPFS flowers.” [Read the entire blog post](https://blog.ipfs.tech/2023-03-implementation-principles/)!
* [IPFS Open Metaverse Base Camp Accelerator](https://outlierventures.io/ipfs-open-metaverse-base-camp/)
* The latest cohort kicked off on May 8, 2023. Co-delivered by Protocol Labs and Outlier Ventures, the program will run for 12 weeks and provide the teams in the cohort with the knowledge, networks, and capital they need to succeed as startups in Web3. Teams will pitch their products and services at Demo Day in August. [Visit the website to learn more!](https://outlierventures.io/ipfs-open-metaverse-base-camp/)
## **IPFS Thing 2023 on YouTube 📺**
All of the talks and presentations from this year's gathering of the IPFS implementers community are now available on YouTube. If you weren't able to attend, now is the perfect chance to catch up! Below you will find links to playlists for each content track:
* [Opening & Keynotes](https://www.youtube.com/playlist?list=PLuhRWgmPaHtRnO5G2EF0RxYebcQzLDf5F)
* [Community & Governance](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTIFbOVO5YfXkoFg6wIGbBN)
* [Integrating IPFS](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTI0MS6ZjSJjBxZp7rcjSS_)
* [Decentralized Compute & AI](https://www.youtube.com/playlist?list=PLuhRWgmPaHtQ_lKtbTR-vIW1LYuTjcaPw)
* [HTTP Gateways](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTapMgLW7rRh92Tk8u7wip5)
* [Content Routing](https://www.youtube.com/playlist?list=PLuhRWgmPaHtRBWV3SvInC5ATS8aKV3lsW)
* [Interplanetary Databases](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTO8hr2CYiJPTSe7wybW_op)
* [IPFS on the Web](https://www.youtube.com/playlist?list=PLuhRWgmPaHtQ-TO65P62tqfUM85HCIqSj)
* [Data Transfer](https://www.youtube.com/playlist?list=PLuhRWgmPaHtS6WBDGK8oxcBHA6ILKatVk)
* [IPFS Deployments & Operators](https://www.youtube.com/playlist?list=PLuhRWgmPaHtTYOY5l8nehP_Vt6Ek-svrp)
* [Measuring IPFS](https://www.youtube.com/playlist?list=PLuhRWgmPaHtQkkbiq-PbIkt9_S2NjJz6x)

[Diff truncated: the remaining changes add binary image assets (not rendered here), including src/assets/Lassie.png and src/assets/brave-choice.png, plus a one-line placeholder text file. Some files were not shown because too many files changed in this diff.]