Compare commits


5 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| chase.fil | c33784f485 | Merge branch 'main' into dependabot/github_actions/actions/checkout-4 | 2023-09-27 12:29:13 -06:00 |
| chase.fil | c57ff65ecb | Merge branch 'main' into dependabot/github_actions/actions/checkout-4 | 2023-09-26 13:38:32 -06:00 |
| Chris Waring | baa3918a5e | Merge branch 'main' into dependabot/github_actions/actions/checkout-4 | 2023-09-18 17:13:23 +01:00 |
| Chris Waring | 0226e6508c | Merge branch 'main' into dependabot/github_actions/actions/checkout-4 | 2023-09-18 17:08:32 +01:00 |
| dependabot[bot] | 9ab33ca6d3 | Bump actions/checkout from 3 to 4 | 2023-09-11 12:59:45 +00:00 |

Full message of 9ab33ca6d3:

Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
29 changed files with 67 additions and 938 deletions

View File

@@ -14,7 +14,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout Repo
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       - name: Compress Images
         uses: calibreapp/image-actions@main

View File

@@ -8,7 +8,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repo
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
      - name: Install dependencies
        run: npm ci
      - name: Check for scheduled posts

View File

@@ -27,7 +27,7 @@ jobs:
     steps:
       - name: Checkout repo
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
      - name: Sync target branch

View File

@@ -50,8 +50,8 @@ Now edit the metadata at the top of the file.
- `description` - used as the meta description tag on the post-page. **required**
- `date` - the "_published at_" date, shown on the [blog index page](https://blog.ipfs.io), please update at posting time to reflect current date - **required** (posts will not be displayed until this date on the live blog, but you will see them locally when using `make dev`)
- `author` - used to give you credit for your words - **required**
- `permalink` - the path to the blog post. Please start and end URLs with a `/` (`/my/url/`). **required**
- `tags` - used to categorize the blog post
- `permalink` - can be used to override the post URL if needed. Please start and end URLs with a `/` (`/my/url/`).
- `header_image` - name of the image displayed on the [blog homepage](https://blog.ipfs.tech/). See [Custom header image](#custom-header-image) for more details.
#### Custom header image

package-lock.json (generated, 16 lines changed)
View File

@@ -6364,9 +6364,9 @@
       }
     },
     "node_modules/caniuse-lite": {
-      "version": "1.0.30001549",
-      "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001549.tgz",
-      "integrity": "sha512-qRp48dPYSCYaP+KurZLhDYdVE+yEyht/3NlmcJgVQ2VMGt6JL36ndQ/7rgspdZsJuxDPFIo/OzBT2+GmIJ53BA==",
+      "version": "1.0.30001470",
+      "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001470.tgz",
+      "integrity": "sha512-065uNwY6QtHCBOExzbV6m236DDhYCCtPmQUCoQtwkVqzud8v5QPidoMr6CoMkC2nfp6nksjttqWQRRh75LqUmA==",
       "dev": true,
       "funding": [
         {
@@ -6376,10 +6376,6 @@
         {
           "type": "tidelift",
           "url": "https://tidelift.com/funding/github/npm/caniuse-lite"
-        },
-        {
-          "type": "github",
-          "url": "https://github.com/sponsors/ai"
         }
       ]
     },
@@ -30071,9 +30067,9 @@
       }
     },
     "caniuse-lite": {
-      "version": "1.0.30001549",
-      "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001549.tgz",
-      "integrity": "sha512-qRp48dPYSCYaP+KurZLhDYdVE+yEyht/3NlmcJgVQ2VMGt6JL36ndQ/7rgspdZsJuxDPFIo/OzBT2+GmIJ53BA==",
+      "version": "1.0.30001470",
+      "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001470.tgz",
+      "integrity": "sha512-065uNwY6QtHCBOExzbV6m236DDhYCCtPmQUCoQtwkVqzud8v5QPidoMr6CoMkC2nfp6nksjttqWQRRh75LqUmA==",
       "dev": true
     },
     "caseless": {

View File

@@ -5,7 +5,7 @@
>
<div class="flex-shrink lg:max-w-sm xl:max-w-xl mb-4 lg:mb-0">
<h2 class="type-h2">Stay informed</h2>
-        <p class="mt-2 mb-6 mr-2">
+        <p class="mt-2 mr-2">
Sign up for the IPFS newsletter (<router-link
:to="latestWeeklyPost ? latestWeeklyPost.path : ''"
class="text-blueGreenLight hover:underline"
@@ -13,43 +13,69 @@
>) for the latest on releases, upcoming developments, community events,
and more.
</p>
<a target="_blank" href="https://ipfs.fyi/newsletter">
<button
type="button"
class="
px-3
py-2
text-white
bg-blueGreen
font-semibold
rounded
hover:bg-blueGreenScreen
transition
duration-300
"
>
Sign up
</button>
</a>
</div>
<div
<form
id="mc-embedded-subscribe-form"
name="mc-embedded-subscribe-form"
class="flex lg:justify-end max-w-lg xl:w-2/5"
action="https://ipfs.us4.list-manage.com/subscribe/post?u=25473244c7d18b897f5a1ff6b&amp;id=cad54b2230"
method="post"
target="_blank"
@submit="subscribeClick"
>
<div id="mc_embed_signup_scroll" class="grid gric-col-2 w-full">
<div class="fields flex flex-col sm:flex-row col-start-1 col-span-2">
<div class="sm:ml-4 sm:pt-0"></div>
<input
id="mce-EMAIL"
v-model="email"
required
type="email"
aria-label="Email Address"
class="flex-grow text-black p-2 rounded"
placeholder="email@your.domain"
name="EMAIL"
/>
<div class="sm:ml-4 sm:pt-0 pt-2">
<input
id="mc-embedded-subscribe"
type="submit"
value="Subscribe"
name="subscribe"
class="p-2 text-white font-semibold bg-blueGreen hover:bg-blueGreenScreen transition duration-300 rounded cursor-pointer w-full"
/>
</div>
</div>
<label class="pt-2 col-start-1 col-span-2" for="gdpr_28879">
<input
id="gdpr_28879"
type="checkbox"
class=""
required
name="gdpr[28879]"
value="Y"
/><span class="pl-2">Please send me the newsletter</span>
</label>
</div>
</div>
<div id="mergeRow-gdpr">
<div style="position: absolute; left: -5000px" aria-hidden="true">
<input
type="text"
name="b_25473244c7d18b897f5a1ff6b_cad54b2230"
tabindex="-1"
value=""
/>
</div>
<!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
</div>
</form>
</div>
</template>
<script>
import { mapState } from 'vuex'
import countly from '../../util/countly'
export default {
name: 'NewsletterForm',
props: {},
@@ -59,6 +85,10 @@ export default {
computed: {
...mapState('appState', ['latestWeeklyPost']),
},
-  methods: {},
+  methods: {
+    subscribeClick() {
+      countly.trackEvent(countly.events.NEWSLETTER_SUBSCRIBE)
+    },
+  },
}
</script>

View File

@@ -19,14 +19,7 @@
:block-lazy-load="blockLazyLoad"
/>
<div
-      class="
-        grid-margins
-        pt-8
-        grid grid-cols-1
-        md:grid-cols-2
-        lg:grid-cols-3
-        gap-8
-      "
+      class="grid-margins pt-8 grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-8"
itemscope
itemtype="http://schema.org/Blog"
>
@@ -44,17 +37,7 @@
class="flex justify-center mt-8 pb-4"
>
<button
-      class="
-        px-3
-        py-2
-        text-white text-xl
-        bg-blueGreen
-        font-semibold
-        rounded
-        hover:bg-blueGreenScreen
-        transition
-        duration-300
-      "
+      class="px-3 py-2 text-white text-xl bg-blueGreen font-semibold rounded hover:bg-blueGreenScreen transition duration-300"
@click="handleLoadMoreClick"
>
Load More
@@ -344,7 +327,7 @@ export default {
(item) =>
item.frontmatter &&
item.frontmatter.tags &&
-          item.frontmatter.tags.find((tag) => tag.name === 'newsletter')
+          item.frontmatter.tags.find((tag) => tag.name === 'weekly')
)
.sort(
(a, b) => new Date(b.frontmatter.date) - new Date(a.frontmatter.date)

View File

@@ -1,130 +0,0 @@
---
title: 'Introducing Nabu: Unleashing IPFS on the JVM'
description: 'Learn about a new fast IPFS implementation in Java'
author: Ian Preston
date: 2023-11-07
permalink: '/2023-11-introducing-nabu/'
header_image: '/nabu-banner-2023.png'
tags:
- 'ipfs'
- 'nabu'
- 'bitswap'
---
Greetings from the [Peergos](https://peergos.org) team! We are thrilled to unveil what we've been working on this year: [Nabu](https://github.com/peergos/nabu), our sleek and versatile Java implementation of IPFS. Named after the ancient Mesopotamian god of literacy, rational arts, and wisdom, Nabu makes decentralised data storage and retrieval available to the large JVM ecosystem. It's now *production ready*, as we are using it in Peergos - a decentralised, secure file storage, sharing and social network.
## Introducing Nabu: Empowering Java with IPFS Magic
At its core, Nabu is a minimal IPFS implementation for storing and retrieving data blocks over the libp2p protocol. But we didn't stop there: we've also added a touch of innovation with features like authed bitswap. This addition enables the creation of private data blocks, accessible only to those with authorized permissions. Intrigued? Dive into the finer details of this innovation in our dedicated post on [authed bitswap](https://peergos.org/posts/bats).
Our journey in crafting Nabu involved the implementation of additional libp2p protocols, including:
* Kademlia (including IPNS): The very backbone of IPFS, aiding in the discovery of blocks and their owners.
* Bitswap + Auth Extension: A protocol that facilitates the exchange of data blocks.
We built upon the solid foundation of [jvm-libp2p](https://github.com/libp2p/jvm-libp2p). As we delved deeper, we realized the need to implement several crucial components. These include the [yamux muxer](https://github.com/libp2p/jvm-libp2p/tree/develop/libp2p/src/main/kotlin/io/libp2p/mux/yamux), the [TLS security provider](https://github.com/libp2p/jvm-libp2p/blob/develop/libp2p/src/main/kotlin/io/libp2p/security/tls/TLSSecureChannel.kt) (complete with ALPN early muxer negotiation), and a substantial portion of a QUIC transport (still a work in progress). While much of this effort started in a fork, we collaborated with [Consensys](https://consensys.io) to upstream our contributions into the main project, which has now released [v1.0.0](https://github.com/libp2p/jvm-libp2p/releases/tag/1.0.0) as a result. This is used in [Teku](https://github.com/ConsenSys/teku), a Java Ethereum 2 implementation.
[<img src="../assets/nabu/modules.png" width="500" height="300"/>](../assets/nabu/modules.png)
Nabu's API empowers developers with the following methods:
* [id](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-id)
* [version](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-version)
* [block/get](https://github.com/Peergos/nabu/blob/master/src/main/java/org/peergos/BlockService.java#L12)
* [block/put](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-block-put)
* [block/rm](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-block-rm)
* [block/stat](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-block-stat)
* [block/has](https://github.com/Peergos/nabu/blob/master/src/main/java/org/peergos/blockstore/Blockstore.java#L25)
* [refs/local](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-refs-local)
* [bloom/add](https://github.com/Peergos/nabu/blob/master/src/main/java/org/peergos/blockstore/Blockstore.java#L37)
* [dht/findprovs](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-dht-findprovs)
Most of these functions align with [Kubo](https://github.com/ipfs/kubo), but we've added block/has, which is a much more efficient way to ask whether we have a block, as well as bloom/add, which is useful if you are adding blocks to the blockstore externally (typically with multiple servers and an S3 blockstore, using a bloom filter). In addition, we've added a few extra optional parameters to block/get, which you'll hear more about in the Performance section below.
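As a rough illustration of how these endpoints line up with Kubo's RPC conventions, here is a hedged sketch; the API address and the exact block/has path are assumptions for illustration, not taken from Nabu's documentation:

```shell
# Hypothetical sketch of calling the Kubo-style RPC endpoints named above.
# The API address and the block/has path are assumptions for illustration.
API="http://127.0.0.1:5001"
CID="zdpuAwfJrGYtiGFDcSV3rDpaUrqCtQZRxMjdC6Eq9PNqLqTGg"

# Standard Kubo-compatible call:
STAT_URL="${API}/api/v0/block/stat?arg=${CID}"
# Nabu's extra endpoint, a cheap "do you have this block?" check:
HAS_URL="${API}/api/v0/block/has?arg=${CID}"

echo "$STAT_URL"
echo "$HAS_URL"
# With a node running, issue them with e.g.: curl -X POST "$STAT_URL"
```

The `curl` invocation is left commented out since it requires a running node.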
## Unique Features
Nabu boasts some distinctive features that simplify building on IPFS:
* P2P HTTP Proxy: This feature facilitates HTTP requests to listening peers, encrypting communication over libp2p streams. Bid farewell to the complexities of TLS certificate authorities and DNS.
* Built-in S3 Blockstore: Seamlessly integrate external blockstores like S3.
* [Infini-Filter](https://dl.acm.org/doi/10.1145/3589285): A bloom filter replacement that offers infinite expandability.
* Peer-Specific Block Retrieval: Nabu empowers developers to fetch blocks from specific peers, streamlining data retrieval and improving privacy (See Performance section below).
Let's shed some light on the first of these gems: the P2P HTTP proxy. A component we initially implemented in [Kubo in 2018](https://peergos.org/posts/dev-update#Decentralization) (behind an experimental flag), this feature introduces a new gateway endpoint with paths in the format:
**/p2p/$peerid/http/**
Its function is simple yet transformative: it proxies incoming HTTP requests to the specified $peerid while trimming the preceding "/p2p/$peerid/http" path. On the other end, the setup forwards incoming requests to a designated endpoint. This paradigm grants the convenience of traditional HTTP-based architecture, sans the complexities of DNS and TLS certificate authorities. By addressing the node using its public key, secure connections become effortlessly achievable. The diagram below illustrates how we use this proxy in Peergos.
[<img src="../assets/nabu/p2p-http-proxy.png" width="640" height="340"/>](../assets/nabu/p2p-http-proxy.png)
For a simpler example of using this, see our single file demo [chat app](https://github.com/Peergos/nabu-chat/blob/main/src/main/java/chat/Chat.java).
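The path trimming described above can be sketched in a few lines of shell; the peer ID below is a made-up placeholder, not a real peer:

```shell
# Sketch of the proxy's path rewrite; the peer id is a hypothetical placeholder.
PEERID="12D3KooWExamplePeerIdPlaceholder"
INCOMING="/p2p/${PEERID}/http/index.html"

# The gateway strips the leading "/p2p/$peerid/http" before forwarding
# the request to the target peer's designated endpoint:
FORWARDED="${INCOMING#/p2p/${PEERID}/http}"
echo "$FORWARDED"
```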
## Performance
### Faster, more private block retrieval
Drawing from experience, we recognized the inefficiency of requesting every single block from the DHT or connected peers. This practice leads to excessive bandwidth consumption and sluggish content retrieval. Enter our solution: a new optional parameter, "peers", in block/get, allowing retrieval from pre-specified peer IDs. If those peers are unreachable, a DHT lookup through the dht/findprovs API serves as a fallback. This design of taking a set of peer IDs to ask for blocks encourages users to design their programs to route at a higher level than blocks, improving speed, bandwidth usage and privacy. Many apps will know in advance which peers they want to retrieve data from, and with this parameter they can massively reduce bandwidth and speed up retrieval. The motto here is "Route your peers, not your data". In Peergos, for example, given a capability to a file we can look up the file owner's home server (specifically its peer ID) and directly send bitswap requests there, so we only need to fall back to DHT lookups if their home server is unreachable.
### Reduced bandwidth and CPU usage
We believe that *providing* (announcing to the DHT that you have a given CID) every single block of data you have does not scale. This is because the number of DHT lookups and provide calls increases with the amount of data you are storing. The issues with trying to scale this have been [documented](https://blog.ipfs.tech/2023-09-amino-refactoring/#making-reprovides-to-amino-lightning-fast). Compare this to BitTorrent, which has been around much longer and has a much larger DHT, but where providing doesn't scale with the amount of data in a torrent and idle bandwidth usage is much lower. For this reason, we've made providing blocks in Nabu optional, and disabled it in Peergos (unless you are running a mirror).
This leads us to the next optimisation, enabled by only sending block requests to peers we think have the data. In Kubo, bitswap will broadcast block wants to all connected peers (typically in the 100s). This is both a privacy issue and a bandwidth hog as it means joining the main IPFS DHT is very resource intensive. Nabu has an option to block such aggressive peers that flood us with requests for blocks we don't have. With this option enabled, the incoming idle bandwidth usage is reduced by 10X.
### Benchmark
We benchmarked Nabu against a real-world dataset, the Peergos PKI: a [CHAMP](https://blog.acolyer.org/2015/11/27/hamt/) structure with six layers, 6000 blocks, and a total size of ~2 MiB. The results speak volumes: while standard Kubo took 120 seconds to retrieve this dataset using the pin command, Nabu accomplished the task in a mere 5 seconds. And this was achieved without any significant optimization or parallelisation, leaving much room for further enhancement.
[<img src="../assets/nabu/nabu-speed.png" width="604" height="340"/>](../assets/nabu/nabu-speed.png)
## Compatibility
Ensuring seamless integration, we subjected Nabu to a suite of interoperability tests against all libp2p implementations, including go-libp2p, rust-libp2p, js-libp2p, and nim-libp2p across historical versions. The results of these tests are documented [here](https://github.com/libp2p/test-plans/actions/runs/5671451848/attempts/1#summary-15368587233). Some of the results are below.
[<img src="../assets/nabu/nabu-interop.png" width="808" height="340"/>](../assets/nabu/nabu-interop.png)
## Bringing Nabu to Life: Integration and Usage
Getting started with Nabu is simple. Choose between utilizing it through the HTTP API or embedding it directly into your process. Here's a compilable example of the embedding process in Java:
```java
List<MultiAddress> swarmAddresses = List.of(new MultiAddress("/ip6/::/tcp/4001"));
List<MultiAddress> bootstrapAddresses = List.of(new MultiAddress("/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa"));
BlockRequestAuthoriser authoriser = (cid, block, peerid, auth) -> CompletableFuture.completedFuture(true);
HostBuilder builder = new HostBuilder().generateIdentity();
PrivKey privKey = builder.getPrivateKey();
PeerId peerId = builder.getPeerId();
IdentitySection identity = new IdentitySection(privKey.bytes(), peerId);
boolean provideBlocks = true;
SocketAddress httpTarget = new InetSocketAddress("localhost", 10000);
Optional<HttpProtocol.HttpRequestProcessor> httpProxyTarget =
Optional.of((s, req, h) -> HttpProtocol.proxyRequest(req, httpTarget, h));
EmbeddedIpfs ipfs = EmbeddedIpfs.build(new RamRecordStore(),
new FileBlockstore(Path.of("/home/alice/ipfs")),
provideBlocks,
swarmAddresses,
bootstrapAddresses,
identity,
authoriser,
httpProxyTarget
);
ipfs.start();
List<Want> wants = List.of(new Want(Cid.decode("zdpuAwfJrGYtiGFDcSV3rDpaUrqCtQZRxMjdC6Eq9PNqLqTGg")));
Set<PeerId> retrieveFrom = Set.of(PeerId.fromBase58("QmVdFZgHnEgcedCS2G2ZNiEN59LuVrnRm7z3yXtEBv2XiF"));
boolean addToLocal = true;
List<HashedBlock> blocks = ipfs.getBlocks(wants, retrieveFrom, addToLocal);
byte[] data = blocks.get(0).block;
```
If you want a working example app you can fork, have a look at our [chat example](https://github.com/Peergos/nabu-chat). This is a simple CLI app where two users exchange peer IDs (out of band) and then connect and send messages via p2p HTTP requests, which are printed to the console.
## Future plans
We still have lots planned for Nabu including the following:
* NAT traversal with circuit-relay-v2, dcutr and AutoRelay
* mDNS peer discovery
* Android compatibility and demo app
* Quic integration
## Gratitude and Acknowledgments
None of this would have been possible without the support of the [IPFS Implementations Fund](https://arcological.xyz/#ipfs-pool). We extend our heartfelt thanks for making this endeavor a reality.
## Experience Nabu Today!
We invite you to embark on an exploration of Nabu's capabilities. Feel free to give it a whirl, and we eagerly await your feedback and suggestions for improving Nabu. The easiest route is to open an issue on the GitHub repo.
[Discover Nabu on GitHub](https://github.com/peergos/nabu) and unlock a world of decentralized possibilities.

View File

@@ -1,42 +0,0 @@
---
title: 'Connect with us in Istanbul and Prague'
description: 'Connect with the PL IPFS Implementers in Istanbul and Prague for DevConnect and DCxPrague! We want to hear from IPFS users to shape our 2024 plans.'
author: Cameron Wood
date: 2023-11-06
permalink: '/2023-11-connect-in-istanbul-and-prauge'
header_image: ''
tags:
- 'ipfs'
- 'kubo'
- 'helia'
- 'event'
- 'community'
---
Hello, IPFS enthusiasts and users! We want to connect with you and hear your thoughts as we shape the future of IPFS for 2024. Your input is invaluable in guiding our efforts, so we're inviting you to meet with us in Istanbul and Prague at two exciting events: DevConnect / IPFS Connect in Istanbul 🇹🇷 and DCxPrague in Prague 🇨🇿.
## 🌐 Who We Are: The PL IPFS Implementers and Network Infrastructure Operators
We are the PL IPFS implementers and network infrastructure operators, working on projects like Kubo, Helia, and managing the IPFS.io gateway. Our goal is to create a better IPFS ecosystem, and your insights are a crucial part of this journey.
## 👋 We Want to Hear from You
Your input matters! We would be thrilled to connect with as many of our current and prospective users as possible during these upcoming events. Your thoughts and experiences will help us understand your needs and use cases, ultimately guiding our plans for 2024.
## 👂 We're Eager to Listen
Are you planning to attend any of these events? If so, we would love to connect with you and learn more about your experiences with IPFS. Whether you have feedback, insights, or simply want to share your thoughts, we're all ears. Your feedback will help us figure out how to make the most of our time and resources for the IPFS community.
## ❓ How Can You Get Involved?
If you're interested in sharing your thoughts and connecting with us during these events, please fill out [this form](https://forms.gle/CxUQPsEUg2CGkLgh6). We're eager to schedule time to meet with you to discuss your current IPFS challenges, needs, and hopes.
<br />
<a href="https://forms.gle/CxUQPsEUg2CGkLgh6" class="cta-button"> Fill out this form to connect</a>
🙏 Thank You!

View File

@@ -1,95 +0,0 @@
---
title: dAppling - a New Way to Deploy IPFS Sites in Minutes
description: Introducing a seamless way to launch your code on IPFS, featuring straightforward setup, automatic deployments, and more.
author: 🙏 namaskar
date: 2023-11-28
permalink: '/2023-11-dappling/'
header_image: '/2023-12-introducing-dappling-header.png'
tags:
- 'web3'
- 'tooling'
- 'ipns'
---
Welcome! I would love to share what I'm building at dAppling, a platform that aims to simplify the build and deployment process for sites hosted on IPFS. I'll share a bit about us, a bit about the platform, and a bit about what you will get. By the end, it should be clear if dAppling is a tool you'll want to add to your developer toolbox.
## A Bit about Us
I'm Kyle. My co-founder Russell and I have been professional developers (whatever that means) for the last 7 years. We've worked at startups, big tech, and things in between. The last two of those years have been in the web3 space, starting with the creation of a DeFi protocol. We're excited to now be building tools for developers working on the next generation of the web.
## A Bit about dAppling
The first of those tools is dAppling. The word is a portmanteau of "dApp", a term short for decentralized application, and "sapling," because nature is wonderful 🌱. However, we support all kinds of web projects, not just [dApps](https://app.gogopool.com.dappling.eth.limo/): [landing pages](https://arbor-landing.dappling.eth.limo/), [blogs](https://blog.dappling.network), or even a simple page of content arguing against the [usage of acronyms](https://nomoreacronyms-xczmz4.dappling.org).
Basically, we fetch your code, build it into HTML/CSS/JS files, and host those files on IPFS. What makes us special are the features we provide to make your experience easier. Even if you have an existing site, you can use dAppling to create a resilient "alternative frontend" that is hosted on IPFS.
## A Bit about What You Get
When you add a project to dAppling, you will tell us where the code is and what commands to use. After it's built, you will get:
- automatic updates when your code on **GitHub** changes
- hosting on the **InterPlanetary File System** (IPFS)
- a working **dappling.network** subdomain
- a working **dappling.eth** ENS subdomain
- an automatically updating **IPNS** key
## Our Focuses
We have two major focuses at dAppling: **simplicity** and **access**.
We want to make it as easy as possible to get your code hosted. After that, we want it to be accessible and fast. What we want to avoid is a first-time experience where you only see an error screen or have your users waiting forever to load your site.
### Simplicity
We simplify the setup process by automatically detecting your app's configuration. If something does go wrong, we have easy-to-use debugging tools.
#### Simple Setup
Since we have access to your code, we look at a few things like what package manager you use, what sort of framework the project is built with, and certain configuration files. We use this information to prefill the configuration form, so you don't have to.
We have support for environment variables to use during the build process that can be used to configure things like your database URL. Additionally, we support monorepos.
![Autodetect Configuration](../assets/2023-12-introducing-dappling-autodetect.png)
#### Simple Debugging
Try as we might, projects fail to build. Quite a bit! From a linting error to a missing dependency, seeing the error screen seems inevitable. We want to make it as easy as possible to understand what went wrong and how to fix it. We parse the logs and show you the error in what I think is a pretty readable format.
![Readable Error Logs](../assets/2023-12-introducing-dappling-error.png)
If reading logs isn't your thing, we have a button that sends your logs to be parsed by AI and returns a summary of the error. And while it's not perfect, the output has been helpful more often than not.
### Accessibility
Websites need to be accessed, even if the reader is only you! We think the more points of access the better, and each should be available and fast.
#### Speed of Access
The foundation of our storage starts with [Filebase](https://filebase.com/) whose geo-redundant storage locations keep your files available. On top of that, the CDN quickly fetches and caches those files.
#### Points of Access
There are a couple of ways to access your site. When the code is built and uploaded to IPFS, you will receive what is called a [Content Identifier (CID)](https://docs.ipfs.tech/concepts/content-addressing/). It's basically the hash of all your files.
You will receive a new CID every time your site is re-built because the resulting files have changed. Luckily, we use the [InterPlanetary Name System (IPNS)](https://docs.ipfs.tech/concepts/ipns/) to create a key that will always point to the most recent CID.
So the most straightforward way to fetch your content would be directly from an [IPFS node](https://docs.ipfs.tech/concepts/nodes/). Since not everyone is running an IPFS node (yet), you can instead use an [IPFS gateway](https://docs.ipfs.tech/concepts/ipfs-gateway/) in which a third party fetches the content from their node and serves it over HTTPS.
Since we store the IPNS key on our `dappling.eth` ENS name, you can also fetch the content through a service like [eth.limo](https://eth.limo). This service first reads the IPNS key that we set, resolves it to a CID, and then serves the content like a gateway.
Even simpler is using the existing DNS system, via the custom `*.dappling.network` subdomain that we create for you. We also allow adding your own custom domain like `ipfs.crypto-protocol.app`.
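Putting those access points side by side, here is a sketch of the URL shapes involved; the CID is just an example, and the subdomain-gateway form shown is a common IPFS pattern, not necessarily the exact endpoint dAppling provisions:

```shell
# Example CID; the URLs below are illustrative access patterns, not
# necessarily the exact endpoints dAppling provisions.
CID="bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi"

# Path-style public gateway URL:
PATH_URL="https://ipfs.io/ipfs/${CID}"
# Subdomain-style gateway URL (gives each site its own origin):
SUBDOMAIN_URL="https://${CID}.ipfs.dweb.link/"

echo "$PATH_URL"
echo "$SUBDOMAIN_URL"
```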
## Future
We plan to be constantly upgrading the platform as new decentralization techniques appear. As a user, you will notice more points of access, quicker speeds, and features to make usage easier. We also hope to increase decentralization:
- SSR: Serverless applications are popular on platforms like Next.js, and we will be using decentralized compute to increase the types of applications we support.
- Collaboration: The more participants in a project, the better the decentralization becomes. We are working on tools to allow multiple people to configure the project.
## Get Involved
As we continue to improve, we're always looking for user feedback to guide us. Our focus remains on providing a platform that is not just decentralized but also highly performant and user-friendly.
If you run into **any** problems, want to connect, or just say hi, my DMs are open on [𝕏](https://x.com/0xBookland). I would love to hear your feedback and help you get all of your projects deployed as we transition to the infrastructure of the future.
🙏

View File

@@ -1,195 +0,0 @@
---
title: IPFS URL support in CURL
description: 'CURL 8.4.0 shipped with built-in support for ipfs:// and ipns:// addresses.'
author: Mark Gaiser
date: 2023-10-16
permalink: '/ipfs-uri-support-in-curl/'
header_image: '/curl.png'
tags:
- 'community'
- 'URI'
- 'URL'
- 'HTTP'
- 'curl'
---
# `ipfs://` URL support in `curl`
[CURL 8.4.0](https://github.com/curl/curl/releases/tag/curl-8_4_0) shipped with built-in support for `ipfs://` and `ipns://` addresses.
This enables `curl` to seamlessly integrate with the user's preferred [IPFS gateway](https://docs.ipfs.tech/reference/http/gateway/) through the `IPFS_GATEWAY` environment variable or a `gateway` file. Best of all, these capabilities are available for immediate use today:
```bash
$ export IPFS_GATEWAY="http://127.0.0.1:8080" # local (trusted) gateway provided by ipfs daemon like Kubo
$ curl ipfs://bafkreih3wifdszgljcae7eu2qtpbgaedfkcvgnh4liq7rturr2crqlsuey
hello from IPFS
```
In this blog post, we will:
- explore the journey of implementing IPFS URI support in CURL,
- delve into the mechanics of [how CURL locates an IPFS gateway](#how-does-curl-find-an-ipfs-gateway),
- learn how to be immune to [malicious gateways](#malicious-gateways-and-data-integrity),
- and finally, provide [practical CURL examples](#curl-examples) for leveraging IPFS URLs for either deserialized or verifiable responses.
## A brief history
Supporting IPFS in CURL has been attempted [before](https://github.com/curl/curl/pull/8468) as a CURL library feature. Discussions there led to the belief that this should be implemented in the CURL tool itself, not its library. A renewed [implementation attempt](https://github.com/curl/curl/pull/8805) took the tool-side approach, which was ultimately accepted and is available right now in CURL 8.4.0!
IPFS support in CURL effectively consists of two implementation details.
1. CURL tries to find a locally installed or [configured gateway](#how-does-curl-find-an-ipfs-gateway).
2. It then rewrites an address like `ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi` to a gateway URL. This is how curl handles it internally; you see none of this URL rewriting.
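Step 2 can be mimicked in a few lines of shell. This is only a sketch of the rewrite, assuming a local gateway at Kubo's default address; the real logic lives inside the curl tool:

```shell
# Sketch of the URL rewrite curl performs internally.
# Assumption: the gateway is a local Kubo daemon at its default address.
GATEWAY="http://127.0.0.1:8080"   # in curl this comes from IPFS_GATEWAY or a gateway file
URL="ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi"

CID="${URL#ipfs://}"                  # strip the scheme to get the CID
REWRITTEN="${GATEWAY%/}/ipfs/${CID}"  # append the standard /ipfs/<cid> path
echo "$REWRITTEN"
```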
If you have IPFS installed locally then running `curl ipfs://` will Just Work™. If not, CURL will return an error with details about how to set up the gateway preference. This ensures user agency is respected: no third-party gateway is used as an implicit default.
## Why is `ipfs://` URL support so important?
Why isn't `https://ipfs.io/ipfs/bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi` equally acceptable?
Or why isn't a local URL `http://localhost:8080/ipfs/bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi` fine?
Both addresses are tied to a specific _location_.
IPFS is a modular suite of protocols purpose-built for the organization and transfer of [content-addressed](https://docs.ipfs.tech/concepts/content-addressing) data. It shouldn't matter where the content is; a Content Identifier ([CID](https://docs.ipfs.tech/concepts/glossary/#cid)) is all that is required. The "where" part is an implementation detail that an IPFS system takes care of. Hardcoding a location in addition to a CID (like a specific HTTP gateway) limits end users to IPFS resources available through that one specific, centralized point of entry.
If we pull the URL apart we see:
![](../assets/ipfs_uri_where_protocol_what.png)
Users of the IPFS system should not care about the _where_ part, nor be coerced to use a specific, hard-coded entry point into the system.
Public gateways like `ipfs.io` are always owned by some entity and could be censored or shut down at any time. Many gateways do not allow playback of deserialized videos, or only respond to CIDs from allowlists, to reduce costs. Other gateways block specific CIDs from resolving in specific jurisdictions for legal reasons. Community-run public gateways have limits and throttle usage.
These are not limitations of IPFS, but purely limitations a specific gateway has set through custom configuration. An IPFS user should always have the ability to avoid such limitations by choosing to self-host and [run their own IPFS node with a local gateway](https://docs.ipfs.tech/install/).
<!-- TODO: remove? feels like duplicate of we already say in this and "malicious" sections, but mentioning ffmpeg blogpost feels like something we should keep somewhere
This is why running a local node (and therefore a local gateway, it's part of a node) is so important. Even though you still effectively use `http://localhost:8080` as gateway, it's hosted by you locally backed by the many peers your node is connected with. Your experience in using IPFS is going to be best and fastest with a local node. Even when your local gateway isn't working it's easy for you to restart your node and get that gateway back and running. You can't do that on public gateways that you don't control.
One of the many reasons why we're putting in the effort to make applications recognize IPFS URIs (like [ffmpeg](https://blog.ipfs.tech/2022-08-01-ipfs-and-ffmpeg/)) `ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi` is to let the application in the background find that gateway you're running and giving you the freedom of being truly distributed! This also allows url's to be shared as IPFS url's (like `ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi`) without any trace of a (central) gateway and bring us one step closer to a distributed world where it doesn't matter anymore where that data is located.
-->
## How does CURL find an IPFS Gateway?
Any IPFS implementation that has support for [IPIP-280](https://github.com/ipfs/specs/pull/280) exposes an IPFS gateway that CURL (and [ffmpeg](https://blog.ipfs.tech/2022-08-01-ipfs-and-ffmpeg/)) can use. At the moment of writing that's just [Kubo](https://github.com/ipfs/kubo/releases).
CURL 8.4.0 and greater looks for a gateway in the following order:
1. The `IPFS_GATEWAY` environment variable; if it is set, it is used.
2. The `--ipfs-gateway` CLI argument.
3. The first line of the `~/.ipfs/gateway` file.
If a gateway hint is found in any of those places, and it is a valid HTTP URL, then CURL will use it. If not, you'll get an error message pointing to the [CURL documentation related to IPFS](https://curl.se/docs/ipfs.html) to help you further.
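The lookup order above can be sketched as follows (a hypothetical Python rendering of the documented order, not curl's actual C implementation):

```python
import os
from pathlib import Path
from typing import Optional

def find_ipfs_gateway(cli_gateway: Optional[str] = None) -> Optional[str]:
    """Mirror the documented lookup order for a gateway hint."""
    # 1. IPFS_GATEWAY environment variable
    env = os.environ.get("IPFS_GATEWAY")
    if env:
        return env
    # 2. --ipfs-gateway CLI argument
    if cli_gateway:
        return cli_gateway
    # 3. first line of the ~/.ipfs/gateway file
    gateway_file = Path.home() / ".ipfs" / "gateway"
    if gateway_file.is_file():
        lines = gateway_file.read_text().splitlines()
        if lines:
            return lines[0].strip()
    return None  # curl reports a gateway detection failure at this point
```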
One can specify any IPFS gateway that complies with the [Gateway Specifications](https://specs.ipfs.tech/http-gateways/). Using a local gateway is highly recommended, as it provides the best security guarantees.
## Malicious gateways and data integrity
Requesting deserialized responses and delegating hash verification to a third-party gateway comes with risks. A public gateway may be malicious, or a well-known and respected gateway may get hacked and start returning payloads that do not match the requested CIDs. How can one protect against that?
If deserialized responses are necessary, one should run their own gateway in a local, controlled environment. Every block of data retrieved through a self-hosted IPFS gateway is verified to match the hash from the CID. For maximum flexibility and security, find an implementation that provides the gateway endpoint (i.e. [Kubo](https://docs.ipfs.tech/install/command-line/)) and run it yourself!
When using a third-party gateway that one can't fully trust, the only secure option is to [request verifiable response types](https://docs.ipfs.tech/reference/http/gateway/#trustless-verifiable-retrieval) such as [application/vnd.ipld.raw](https://www.iana.org/assignments/media-types/application/vnd.ipld.raw) (a single block) or [application/vnd.ipld.car](https://www.iana.org/assignments/media-types/application/vnd.ipld.car) (multiple blocks in a CAR archive). Both allow one to verify locally that the data returned by the gateway matches the requested CID, removing the attack surface for [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).
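To illustrate what that local verification entails for a single raw block, the sketch below decodes a base32 CIDv1 with the `raw` codec and a `sha2-256` multihash and compares the digest against the payload. This is a simplified illustration; real clients use a CID library and support more codecs and hash functions:

```python
import base64
import hashlib

def verify_raw_block(cid: str, data: bytes) -> bool:
    """Check a base32 CIDv1 (raw codec, sha2-256) against the block bytes."""
    if not cid.startswith("b"):
        raise ValueError("expects a base32 (multibase prefix 'b') CID")
    # RFC 4648 base32: the stdlib decoder wants upper case and padding
    b32 = cid[1:].upper()
    b32 += "=" * (-len(b32) % 8)
    decoded = base64.b32decode(b32)
    version, codec, hash_fn, hash_len = decoded[0], decoded[1], decoded[2], decoded[3]
    if (version, codec, hash_fn, hash_len) != (0x01, 0x55, 0x12, 0x20):
        return False  # only CIDv1 + raw codec + sha2-256 handled in this sketch
    return decoded[4:4 + hash_len] == hashlib.sha256(data).digest()
```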
## CURL Examples
### Deserialized responses
::: callout
By default, a trusted local gateway acts as a bridge between traditional HTTP clients and IPFS.
It performs the necessary hash verification and UnixFS _deserialization_, and returns reassembled files to the client, as if they were stored on a traditional HTTP server. This means all validation happens on the gateway, and clients trust that the gateway correctly validates content-addressed data before returning it to them.
:::
#### Downloading a file from IPFS with CURL
```bash
$ curl ipfs://bafkreih3wifdszgljcae7eu2qtpbgaedfkcvgnh4liq7rturr2crqlsuey -o out.txt
```
If curl responds with `curl: IPFS automatic gateway detection failure`, make sure `IPFS_GATEWAY` is set (see examples below).
#### Explicitly specifying a gateway
To use a local gateway on custom port 48080:
```bash
$ export IPFS_GATEWAY=http://127.0.0.1:48080
$ curl ipfs://bafkreih3wifdszgljcae7eu2qtpbgaedfkcvgnh4liq7rturr2crqlsuey
hello from IPFS
```
When setting an environment variable is not feasible, one can use `--ipfs-gateway` instead:
```bash
$ curl --ipfs-gateway http://127.0.0.1:48080 ipfs://bafkreih3wifdszgljcae7eu2qtpbgaedfkcvgnh4liq7rturr2crqlsuey
hello from IPFS
```
#### Following subdomain redirects
::: callout
By default, URL resolution in `curl` does not follow HTTP redirects and assumes the endpoint implements a deserializing [path gateway](https://specs.ipfs.tech/http-gateways/path-gateway/) or, at the very least, the [trustless gateway](https://specs.ipfs.tech/http-gateways/trustless-gateway/).
When pointing `curl` at a [subdomain gateway](https://specs.ipfs.tech/http-gateways/subdomain-gateway) (like `https://dweb.link` or the `http://localhost:8080` provided by a [local Kubo node](https://docs.ipfs.tech/how-to/command-line-quick-start/)) one has to pass `-L` in the curl command to follow the redirect.
:::
```bash
$ IPFS_GATEWAY=http://localhost:8080 curl -s -L ipfs://bafkreih3wifdszgljcae7eu2qtpbgaedfkcvgnh4liq7rturr2crqlsuey
hello from IPFS
```
#### Piping and streaming responses
A deserialized response returned by CURL can be piped directly into a video player:
```bash
$ curl ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi | ffplay -
```
### Verifiable responses
::: callout
By explicitly requesting [application/vnd.ipld.raw](https://www.iana.org/assignments/media-types/application/vnd.ipld.raw) (a block) or [application/vnd.ipld.car](https://www.iana.org/assignments/media-types/application/vnd.ipld.car) (a stream of blocks) responses, by means defined in [Trustless Gateway Specification](https://specs.ipfs.tech/http-gateways/trustless-gateway/), the user is able to fetch raw content-addressed data and [perform hash verification themselves](https://docs.ipfs.tech/reference/http/gateway/#trustless-verifiable-retrieval).
:::
#### Fetching and verifying a directory from an untrusted gateway
Requesting [trustless and verifiable](https://docs.ipfs.tech/reference/http/gateway/#trustless-verifiable-retrieval) CAR response via `Accept` HTTP header:
```bash
$ export IPFS_GATEWAY="https://ipfs.io" # using untrusted public gateway
$ curl -H "Accept: application/vnd.ipld.car" "ipfs://bafybeiakou6e7hnx4ms2yangplzl6viapsoyo6phlee6bwrg4j2xt37m3q" > dag.car
```
Then, the CAR can be moved around and imported into some other IPFS node:
```bash
$ ipfs dag import dag.car
```
or verified and unpacked locally, without having to run a full IPFS node, with tools like [go-car](https://github.com/ipld/go-car/tree/master/cmd/car#readme) or [ipfs-car](https://www.npmjs.com/package/ipfs-car):
```bash
$ npm i -g ipfs-car
$ ipfs-car unpack dag.car --output dag.out
$ ls dag.out
1007 - Sustainable - alt.txt
1007 - Sustainable - transcript.txt
1007 - Sustainable.png
```
## What's next?
More places supporting IPFS addresses. Everyone can integrate `ipfs://` and `ipns://` URL support into their application. See the specifications proposed in [IPIP-280](https://github.com/ipfs/specs/pull/280) for technical details. We are [tracking potential projects](https://github.com/ipfs/integrations/issues) where an integration makes sense! If you feel up to the challenge, don't hesitate to drop a comment in one of the [potential projects](https://github.com/ipfs/integrations/issues) for IPFS URL integration, or find us on:
* [Matrix](https://matrix.to/#/#ipfs-space:ipfs.io), [Discord](https://discord.com/invite/ipfs) or [Slack](https://filecoin.io/slack)
* [Discussion Forum](https://discuss.ipfs.tech/)
Or in one of the many other places where the [IPFS community](https://docs.ipfs.tech/community/) is active.

---
title: Introducing Major Improvements to Omnilingo
description: "We're happy to introduce some major improvements to Omnilingo, the decentralised language learning platform designed with special attention to small and marginalised language communities."
date: 2023-11-20
permalink: '/major-improvements-to-omnilingo/'
header_image: "/omnilingo-x-ipfs.jpg"
tags:
- omnilingo
---
## Introduction
Nearly two years ago, the IPFS Dev Grants program funded the first grant for Omnilingo to explore how IPFS could meet the needs of their users: groups with limited bandwidth who need applications that work offline-first and allow full user control of data. You can read the [original post from 2021](https://blog.ipfs.tech/2021-12-17-omnilingo/); several iterations of the grant later (generously provided by the Filecoin Foundation), we're happy to share an update.
The mission of Omnilingo is inspiring, and its authors are an incredible team who are pushing on a lot of hard problems all at once, including new approaches to consent-driven data access and revocation patterns. This is critical work and an extraordinarily important use of IPFS that we are happy to shine a light on.
— Dietrich Ayala, technical grant advisor to Omnilingo
## Project Update: Omnilingo
We're happy to introduce some major improvements to Omnilingo, the decentralised language learning platform designed with special attention to small and marginalised language communities. We now have an experimental contribution system, including an encryption-based consent model.
## Overview
We developed Omnilingo two years ago with the goal of making it possible for minority and marginalised language communities to create and curate language-learning data in their languages by developing and publishing formats for language source material hosted on the decentralised filesystem IPFS. Anyone can publish new source material on IPFS, and a compatible Omnilingo client can use this source material to generate language-learning exercises.
The source material is published in the form of Omnilingo data structures on IPFS; previously this had to be done by a knowledgeable web developer operating an IPFS node. We are now happy to present an interface for contributing samples from our demonstration web client!
As with any networked system, collecting and preserving data from our users can be done only with their consent. Managing that consent within the context of a decentralised filesystem comes with its own special challenges, and we designed what we think is as privacy- and consent-respecting a system as possible.
Here's a sample user story illustrating how this might be used:
A language activist encourages members of their endangered language community to contribute their voices, producing a large corpus of spoken audio clips; children of their community and in diaspora can now use Omnilingo to practise outside of the classroom, supporting revitalisation of the language. Decentralisation and the consent system allow the community as a whole as well as individuals to decide who has access to their voices.
As opposed to most current systems for data collection via crowd sourcing, in Omnilingo, contributors own their own data and can define their own terms and conditions for its use.
## Omnilingo privacy structures
Our contribution privacy initiative brings with it a handful of new structures. These are introduced bottom-up; read this section backwards if you prefer a top-down introduction.
### Omnilingo session keys
An Omnilingo session key is a [JSON Web Key]; our implementation uses the [SubtleCrypto WebAPI] to generate and encode these keys. Currently we recommend only 256-bit AES-GCM keys, and our Web client supports only this configuration.
[JSON Web Key]: https://datatracker.ietf.org/doc/html/rfc7517
[SubtleCrypto WebAPI]: https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto
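For illustration, a session key of the recommended shape can be sketched as follows. This is a hypothetical Python analogue of what the SubtleCrypto-based web client produces; the field values follow RFC 7517/7518 conventions for a symmetric JWK:

```python
import base64
import os

def generate_session_key_jwk() -> dict:
    """Sketch: a 256-bit AES-GCM key encoded as a JSON Web Key."""
    raw = os.urandom(32)  # 256 bits of key material
    return {
        "kty": "oct",      # symmetric (octet sequence) key
        "alg": "A256GCM",  # AES-GCM with a 256-bit key
        "k": base64.urlsafe_b64encode(raw).rstrip(b"=").decode(),
        "ext": True,       # extractable, so it can be published/unpublished
    }
```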
Omnilingo session keys form the unit of "consent": for a given session key, users may have contributed several samples. If a user wishes to revoke their consent for a sample, they signal this by unpublishing the session key, thus revoking consent for all samples contributed with that key.
For a more positive user experience, we recommend the user-facing interface reference session keys by the [pgpfone wordlist] encoding of their fingerprint.
[pgpfone wordlist]: https://web.archive.org/web/20100326141145/http://web.mit.edu/network/pgpfone/manual/index.html#PGP000062
### Omnilingo encrypted object
An Omnilingo encrypted object is an object which has been encrypted by an Omnilingo session key; the structure is:
```
{ "alg": alg // AesKeyGenParams
, "keyfpr": keyfpr // key fingerprint: hexadecimal string encoding of the SHA-1 digest of the key
, "iv": iv // initialisation vector used
, "encdata": encdata // Uint8Array of the encrypted data
}
```
See [MDN SubtleCrypto digest documentation] for details of how we generate the fingerprint.
[MDN SubtleCrypto digest documentation]: https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest
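A minimal sketch of the fingerprint computation (assuming the raw key bytes are what gets hashed; the exact input encoding is defined by the implementation):

```python
import hashlib

def key_fingerprint(raw_key_bytes: bytes) -> str:
    """Hexadecimal string encoding of the SHA-1 digest of the key material."""
    return hashlib.sha1(raw_key_bytes).hexdigest()
```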
We wrap both the MP3 of the contribution and the list of Omnilingo clip structures in encrypted objects.
Encrypted clip:
```
{ "chars_sec": chars_sec
, "clip_cid": CID(encrypt(clip_mp3))
, "length": length
, "meta_cid": meta_cid
, "sentence_cid": sentence_cid
}
```
### Omnilingo encrypted index
An Omnilingo encrypted index is similar to the classic Omnilingo root index: a JSON dictionary with language codes as keys and Omnilingo language indices as the values. The `cids` entry of the Omnilingo language index is a list of IPFS CIDs referencing the encrypted lists of Omnilingo clip structures.
An example:
```
{ "ab": { "cids": CID(encrypt(clip_list)) } }
```
### Omnilingo encrypted root
An Omnilingo encrypted root is a JSON dictionary; the keys are fingerprints of Omnilingo session keys, and each value is the CID of an Omnilingo encrypted index encrypted with the corresponding session key.
```
{ "ea6b0c9b2f697c3cbc16fb7978af16aae53bdeb8": "QmdzHipTQWgguLci211Cp3Eh8SWhEnsZA34mGJgGQXYcUV" }
```
Encrypted roots can optionally contain some of the referenced session keys, allowing decryption. In this example, the key `ea6b0c9b...` is included.
```
{ "keys": {
"ea6b0c9b2f697c3cbc16fb7978af16aae53bdeb8": JWK(key)
}
, "dab24db69f6856652275e06c5f092f68623a4041": "QmWug9ie3bpkzVvKDVfuLtksaWsa5Q1DZxsnwmCCAASYj8"
, "ea6b0c9b2f697c3cbc16fb7978af16aae53bdeb8": "QmdzHipTQWgguLci211Cp3Eh8SWhEnsZA34mGJgGQXYcUV"
}
```
### Omnilingo identity
An Omnilingo identity is an IPNS key (colloquially referred to as a `k5`). Published to this `k5` is an encrypted root containing the session keys for which the user (the one controlling the private part of the `k5`) has granted consent. The Omnilingo client has been updated to accept Omnilingo identities, fetching and decrypting the contained encrypted indices.
In the example encrypted root:
```
{ "keys":{
"ea6b0c9b2f697c3cbc16fb7978af16aae53bdeb8": JWK(key)
}
, "dab24db69f6856652275e06c5f092f68623a4041": "QmWug9ie3bpkzVvKDVfuLtksaWsa5Q1DZxsnwmCCAASYj8"
, "ea6b0c9b2f697c3cbc16fb7978af16aae53bdeb8": "QmdzHipTQWgguLci211Cp3Eh8SWhEnsZA34mGJgGQXYcUV"
}
```
The material encrypted with session key `ea6b0c9b2` can be used with the controlling user's consent, whereas the material encrypted with session key `dab24db6` can no longer be used, as the user has unpublished the key.
## Data flows
There are two new data flows introduced with this system: contributing data, and retrieving contributed data.
### Contribution
A contributor client draws sentences from a (presumably classic) Omnilingo language index and contributes new clips. The user starts by generating an Omnilingo identity (`k5`) and a session key. The session key is stored locally.
When the user makes their first contribution (an MP3 recording of them reading a sentence), a new Omnilingo encrypted root index is published to their `k5`:
```
{ "keys": {
fpr(key): JWK(key)
}
, fpr(key): CID({ // encrypted language index
"XX": {
"cids": [CID(encrypt([ // encrypted clip list
encrypted_clip
]))]
}
})
}
```
As the user makes more contributions, the encrypted clip list grows; the encrypted language index and encrypted root index are updated and republished to the `k5` each time, all under the same session key:
```
{ "keys": {
fpr(key): JWK(key)
}
, fpr(key): CID({ "XX": { "cids": [CID(encrypt(clip_list))] } })
}
```
At some point, the user decides to "roll" their session key, creating a new session. (A client might decide to do this automatically, e.g. each time it is opened, or each time the language is switched.) A new session key is generated, and everything propagates up to the user identity (`k5`):
```
{ "keys": {
fpr(key1): JWK(key1)
, fpr(key2): JWK(key2)
}
, fpr(key1): CID({ "XX": { "cids": [CID(encrypt(clip_list1))] } })
, fpr(key2): CID({ "XX": { "cids": [CID(encrypt(clip_list2))] } })
}
```
At some later time, the user decides to revoke consent to use the material recorded under `key1`; the JSON Web Key encoded copy of `key1` is removed, only `fpr(key1)` remains published under their identity:
```
{ "keys": {
fpr(key2): JWK(key2)
}
, fpr(key1): CID({ "XX": { "cids": [CID(encrypt(clip_list1))] } }) // consent revoked
, fpr(key2): CID({ "XX": { "cids": [CID(encrypt(clip_list2))] } })
}
```
Consumers who have stored `key1` will retain access to this data, just as they would if they had stored the decrypted copies; however, use of it would constitute a violation of the user's consent.
### Consumption
Omnilingo consumers now have two types of root indices to deal with: classic root indices and encrypted root indices. An encrypted root index may be detected by the presence of the `keys` field; iterating over this dictionary then gives the consumer a list of fingerprints to look up in the encrypted root index, as well as the key needed to decode the resulting encrypted language index.
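The consent check a consumer performs can be sketched as follows: a session's material is usable only when its fingerprint appears both as an index entry and in the `keys` dictionary. The helper name is illustrative, not part of the Omnilingo API:

```python
def usable_sessions(encrypted_root: dict) -> list:
    """Fingerprints whose session key is still published, i.e. consent is intact."""
    published_keys = encrypted_root.get("keys", {})
    return [fpr for fpr in encrypted_root
            if fpr != "keys" and fpr in published_keys]
```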
## Concluding remarks
Omnilingo now has support for user contributions with sovereignty protections, enabling marginalised language communities to produce and control their own data and integrate it into compatible Omnilingo clients in a user-respecting way. Due to the decentralisation allowed by IPFS, such clients can be hosted anywhere on anyone's infrastructure. We look forward to continuing to improve language learner and language activist access to decentralised and sovereignty-preserving language learning systems.
We invite everyone interested to get involved! Read our [technical paper](https://arxiv.org/abs/2310.06764), check out our [live demo](https://demo.omnilingo.cc), [fork us on GitHub](https://github.com/omnilingo/omnilingo), and join us on Matrix in `#OmniLingo:matrix.org` ([chat now](https://app.element.io/#/room/#OmniLingo:matrix.org)). Our near-term plans include:
* full p2p (dropping the required remote Kubo instance)
* experimenting with isolated networks (useful e.g. for rural communities)
* integration with Filecoin and/or pinning services


---
title: Welcome to IPFS News 198!
description: Featuring announcements about Brave's New IPFS Infobar, Amino, and IPFS Connect!
date: 2023-10-03
permalink: "/newsletter-198"
header_image: "/ipfsnews.png"
tags:
- newsletter
---
## **IPFS Connect 2023 Istanbul 🔭**
IPFS Connect is a community-run regional conference bringing together all of the builders and ecosystems that rely on and use IPFS as the most widely used decentralized content addressing protocol for files and data. This year's event is happening alongside Devconnect and LabWeek23 in Istanbul, Turkey on November 16. Join the IPFS Community for a full day of workshops, lightning talks, and demos showcasing technology, tools, and innovative projects in the IPFS ecosystem.
There are several opportunities for you to get involved with this event whether you're a business, organization, or individual.
<a href="https://blog.ipfs.tech/_2023-ipfs-connect-istanbul/" class="cta-button">Read the blog post</a>
## **Brand New on IPFS ✨**
[Brave Browser's New IPFS Infobar](https://blog.ipfs.tech/_2023-brave-infobar/)
- We're excited to share a new IPFS-related feature that appears in the most recent version of Brave's web browser. A new IPFS Infobar will appear at the top of the browser when you visit an IPFS compatible resource such as a CID on a public gateway or a website with a DNSLink. [Learn more here!](https://blog.ipfs.tech/_2023-brave-infobar/)
[IPFS support was merged into curl](https://twitter.com/bmann/status/1705572964068930010?s=20)
- Thanks to the hard work and dedication of [Mark Gaiser](https://github.com/markg85), IPFS support was recently [merged into curl](https://github.com/curl/curl/pull/8805#issuecomment-1732260385), a command line tool and library for transferring data with URL syntax. More information and an official announcement are to come, but we're excited for this important milestone. IPFS is already in the curl documentation: [https://curl.se/docs/ipfs.html](https://curl.se/docs/ipfs.html)
[Amino (the Public IPFS DHT) is getting a facelift](https://blog.ipfs.tech/2023-09-amino-refactoring/)
- [Read the blog post](https://blog.ipfs.tech/2023-09-amino-refactoring/) to learn all the details and follow this discussion forum thread if you want to be kept up-to-date about further developments: [https://discuss.ipfs.tech/t/dht-discussion-and-contribution-opportunities-in-2023q4/16937/2](https://discuss.ipfs.tech/t/dht-discussion-and-contribution-opportunities-in-2023q4/16937/2)
[The ProbeLab team needs your help — fill out this survey!](https://tally.so/r/npoo6q)
- The ProbeLab team developed tools and infrastructure to capture the metrics you see at [https://probelab.io/](https://probelab.io). We want to expand the list of metrics we capture and build new open-source tools that will help protocol designers and application developers get a better idea of where the performance of their application can improve. This is your chance to influence where our team focuses next. [Please fill in the survey and let us know if and how you would be interested to contribute to this line of work.](https://tally.so/r/npoo6q)
[awesome-ipfs reboot](https://awesome.ipfs.tech/)
- After lying dormant for many months, the awesome-ipfs website has been cleaned up and rebooted. [Check out the updated version here!](https://awesome.ipfs.tech/)
[IPFS & Filecoin Ecosystem Roundup](https://www.youtube.com/watch?v=bdOPPnuZnhw)
- The September Filecoin & IPFS Ecosystem Roundup is online! Check out the video for the latest updates, developments, and insights straight from the community. [Watch it here!](https://www.youtube.com/watch?v=bdOPPnuZnhw)
## **Around the Ecosystem 🌎**
[IPFS on AWS, Part 1 Discover IPFS on a virtual machine](https://aws.amazon.com/blogs/database/part-1-ipfs-on-aws-discover-ipfs-on-a-virtual-machine/)
- Did you know you can run IPFS on AWS? In this 3-part series on the AWS Database Blog, you'll learn how to do it thanks to a step-by-step guide. [Check it out!](https://aws.amazon.com/blogs/database/part-1-ipfs-on-aws-discover-ipfs-on-a-virtual-machine/)
[OrbitDB v1.0 releases](https://github.com/orbitdb/orbitdb)
- "OrbitDB is a serverless, distributed, peer-to-peer database. OrbitDB uses IPFS as its data storage and Libp2p Pubsub to automatically sync databases with peers. It's an eventually consistent database that uses Merkle-CRDTs for conflict-free database writes and merges making OrbitDB an excellent choice for p2p and decentralized apps, blockchain applications and local-first web applications." [Learn more here!](https://github.com/orbitdb/orbitdb)
[New in the Ecosystem Directory: dAppling](https://ecosystem.ipfs.tech/project/dappling/)
- An easy way for web3 developers to deploy their frontend to IPFS with a great developer experience. Connect your GitHub and have a deployed site in a few clicks, with automatic CI/CD, preview builds, and ENS support. [Check it out here!](https://ecosystem.ipfs.tech/project/dappling/)
[New in the Ecosystem Directory: ODD SDK](https://ecosystem.ipfs.tech/project/odd-sdk/)
- ODD SDK is Fission's true local-first, edge computing stack. ODD SDK empowers you to build fully distributed web applications with auth and storage without needing a complex backend. [View it here!](https://ecosystem.ipfs.tech/project/odd-sdk/)
[Popular on the Forums: Questions about a Private IPFS Setup](https://discuss.ipfs.tech/t/how-to-set-up-my-own-bootstrap-nodes-to-enable-discovery-and-connection-between-nodes-with-public-ip-and-nodes-on-a-local-network/16910)
- "How [do I] set up my own bootstrap nodes to enable discovery and connection between nodes with public IP and nodes on a local network?" [Read the discussion.](https://discuss.ipfs.tech/t/how-to-set-up-my-own-bootstrap-nodes-to-enable-discovery-and-connection-between-nodes-with-public-ip-and-nodes-on-a-local-network/16910)
[Job Alert: Filebase is hiring a Senior Digital Marketing Strategist](https://wellfound.com/jobs/2807523-senior-digital-marketing-strategist)
- "Are you a creative and strategic thinker with a passion for driving digital marketing excellence? Do you thrive in dynamic, cutting-edge environments and have a deep understanding of the tech industry? Join us at Filebase, a leading player in the decentralized storage revolution, as a Senior Digital Marketing Strategist." [Learn more here!](https://wellfound.com/jobs/2807523-senior-digital-marketing-strategist)
[LabWeek23 is happening November 13-17](https://23.labweek.io/)
- Have you booked your travel yet? LabWeek23 is happening in Istanbul, Türkiye, from November 13-17, alongside Devconnect! This is your chance to connect and collaborate with visionaries and teams that are domain leaders in ZK Proofs, AI and blockchain, DeSci, decentralized storage, gaming in Web3, public goods funding, cryptoeconomics, and much more. [Learn more about it here!](https://23.labweek.io/)
## **Have something you want featured? 📥**
As part of our ongoing efforts to empower and promote community contributors, we're providing a new way for you to have a chance to influence the monthly IPFS newsletter! If you have something exciting or important that you think the IPFS community should know about, then you can [submit this form](https://airtable.com/appjqlMYucNiOYHl7/shrfPrKe112FW3ucv) to have it be considered for promotion via IPFS communication channels.

---
title: Welcome to IPFS News 199!
description: Featuring CURL supporting IPFS and a new IPFS implementation called Nabu.
date: 2023-11-09
permalink: "/newsletter-199"
header_image: "/ipfsnews.png"
tags:
- newsletter
---
## **IPFS URL support in CURL 🔭**
We're excited to share that, thanks to the hard work of Mark Gaiser, CURL 8.4.0 shipped with built-in support for `ipfs://` and `ipns://` addresses. This is an important advancement, and we've got a blog post you can read to learn more:
<a href="https://blog.ipfs.tech/ipfs-uri-support-in-curl/" class="cta-button">Read the blog post</a>
## **Brand New on IPFS ✨**
[Introducing Nabu: Unleashing IPFS on the JVM](https://blog.ipfs.tech/2023-11-introducing-nabu/)
- Learn about a new fast IPFS implementation in Java by checking out this recent post on the IPFS blog. [Read it here!](https://blog.ipfs.tech/2023-11-introducing-nabu/)
[IPFS Connect Istanbul](https://istanbul2023.ipfsconnect.org/)
- IPFS Connect is a community-run regional conference bringing together all of the builders and ecosystems that rely on and use IPFS as the most widely used decentralized content addressing protocol for files and data. This year's event is happening alongside Devconnect and LabWeek23 in Istanbul, Turkey on November 16. [Register today!](https://istanbul2023.ipfsconnect.org/)
[Connect with the PL IPFS Implementers in Istanbul and Prague](https://forms.gle/CxUQPsEUg2CGkLgh6)
- We want to connect with you and hear your thoughts as we shape the future of IPFS for 2024. Your input is invaluable in guiding our efforts, so we're inviting you to meet with us in Istanbul and Prague at two exciting events: DevConnect / IPFS Connect in Istanbul 🇹🇷 and DCxPrague in Prague 🇨🇿. If you're interested in sharing your thoughts and connecting with us during these events, [please fill out this form.](https://forms.gle/CxUQPsEUg2CGkLgh6)
[New Release: Kubo v0.24.0](https://github.com/ipfs/kubo/releases/tag/v0.24.0)
- Support for content blocking
- Gateway: the root of the CARs are no longer meaningful
- IPNS: improved publishing defaults
- IPNS: record TTL is used for caching
- Experimental Transport: WebRTC Direct
[New Release: Kubo v0.23.0](https://github.com/ipfs/kubo/releases/tag/v0.23.0)
[New Release: Boxo v0.15.0](https://discuss.ipfs.tech/t/boxo-v0-15-0-is-out/17175)
[New Release: Iroh v0.10.0](https://github.com/n0-computer/iroh/releases/tag/v0.10.0)
[Popular on the Forums](https://discuss.ipfs.tech/top?period=monthly)
- Help: [How to diagnose file not propagating to other Gateways?](https://discuss.ipfs.tech/t/how-to-diagnose-file-not-propagating-to-other-gateways/17071)
- Help: [Files pinned to my IPFS node don't show up on any other gateway](https://discuss.ipfs.tech/t/files-pinned-to-my-ipfs-node-dont-show-up-on-any-other-gateway/17132)
- Helia: [Connection closes during bitswap fetches](https://discuss.ipfs.tech/t/connection-closes-during-bitswap-fetches/17041)
[IPFS & Filecoin Ecosystem Roundup](https://www.youtube.com/watch?v=rn1nLUqJ4HM)
- The October Filecoin & IPFS Ecosystem Roundup is online! Check out the video for the latest updates, developments, and insights straight from the community. [Watch it here!](https://www.youtube.com/watch?v=rn1nLUqJ4HM)
[Helia Report 2023-10](https://pl-strflt.notion.site/Helia-Report-2023-10-ddd18180aec54ff9ad06f0771340b850)
[ProbeLab Network Weekly Reports](https://github.com/plprobelab/network-measurements/tree/master/reports/2023)
## **Around the Ecosystem 🌎**
[Call for submissions: awesome-ipfs](https://github.com/ipfs/awesome-ipfs)
- This is a community list of awesome projects, apps, tools, and services related to IPFS. We'd love to see more projects added to it, [so submit yours today!](https://github.com/ipfs/awesome-ipfs)
[IPFS Naming from Scaleway](https://labs.scaleway.com/en/ipfs-naming/)
- Scaleway is launching a new service called IPFS Naming, a managed IPNS service that solves the problem of managing and dynamically updating immutable IPFS addresses. [Learn more about it here!](https://labs.scaleway.com/en/ipfs-naming/)
[The Principles and Practices of IPFS](https://www.amazon.co.jp/o/ASIN/4297138379/gihyojp-22)
- This book has been translated to Japanese and will be published on November 8, [pre-order is available on Amazon.](https://www.amazon.co.jp/o/ASIN/4297138379/gihyojp-22)
[Peergos v0.14.0 featuring Nabu](https://peergos.net/public/peergos/releases)
- The Peergos team just published a new Peergos release, v0.14.0, in which they switch to their new Java implementation of IPFS, Nabu. This reduces idle bandwidth usage by about 10x, as well as CPU and RAM usage, and generally makes p2p stuff faster. [Check out the release notes here!](https://peergos.net/public/peergos/releases)
[Job Alert: Filebase is hiring a Senior Digital Marketing Strategist](https://wellfound.com/jobs/2807523-senior-digital-marketing-strategist)
- "Are you a creative and strategic thinker with a passion for driving digital marketing excellence? Do you thrive in dynamic, cutting-edge environments and have a deep understanding of the tech industry? Join us at Filebase, a leading player in the decentralized storage revolution, as a Senior Digital Marketing Strategist." [Learn more here!](https://wellfound.com/jobs/2807523-senior-digital-marketing-strategist)
[Reality Studies Podcast](https://www.youtube.com/watch?v=902OA94avbY)
- A recent episode of the Reality Studies podcast, by Protocol Labs Arts & Culture Advisor Jesse Damiani, features Asad J. Malik, CEO of Jadu AR. In 2021 and 2022, Jadu released successful NFT collections which were stored using IPFS. Now, owners of those NFTs can integrate them into the company's recently launched mobile AR game. [Watch the interview here!](https://www.youtube.com/watch?v=902OA94avbY)
## **Have something you want featured? 📥**
If you have something exciting or important that you think the IPFS community should know about, you can [submit this form](https://airtable.com/appjqlMYucNiOYHl7/shrfPrKe112FW3ucv) to have it considered for inclusion in the IPFS newsletter.
<a href="https://airtable.com/appjqlMYucNiOYHl7/shrfPrKe112FW3ucv" class="cta-button">Submit form</a>
@@ -1,19 +1,5 @@
---
data:
- title: 'Just released: Kubo 0.24.0!'
date: "2023-11-08"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.24.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.23.0!'
date: "2023-10-05"
publish_date: null
path: https://github.com/ipfs/kubo/releases/tag/v0.23.0
tags:
- go-ipfs
- kubo
- title: 'Just released: Kubo 0.22.0!'
date: "2023-08-08"
publish_date: null

@@ -4,39 +4,11 @@ type: Video
sitemap:
exclude: true
data:
- title: 'Built with IPFS - Mintter and The Hypermedia Protocol'
date: 2023-11-13
publish_date: 2023-11-13T12:00:00+00:00
path: https://www.youtube.com/watch?v=K3U6A4sgKo4
tags:
- Built with IPFS
- demo
- interview
- deep-dive
- title: 'This Month in IPFS - March 2023'
date: 2023-03-23
publish_date: 2023-03-23T12:00:00+00:00
path: https://www.youtube.com/watch?v=_vn52temkDU
tags:
- This Month in IPFS
- community
- demo
- interview
- title: 'This Month in IPFS - February 2023'
date: 2023-02-23
publish_date: 2023-02-23T12:00:00+00:00
path: https://www.youtube.com/watch?v=Cflrlv31oW8
tags:
- This Month in IPFS
- community
- demo
- interview
- title: 'This Month in IPFS - January 2023'
date: 2023-01-26
publish_date: 2023-02-06T12:00:00+00:00
path: https://www.youtube.com/watch?v=kRzNohHeRaM
tags:
- This Month in IPFS
- community
- demo
- interview

Binary files not shown: 12 images removed (130 KiB, 217 KiB, 2.4 MiB, 54 KiB, 4.8 MiB, 12 KiB, 54 KiB, 13 KiB, 53 KiB, 11 KiB, 12 KiB, 34 KiB).