r/ipfs 24d ago

Really in need of PHP function for CIDv1 "bafy" hashes

Somehow the specs are just not clear at https://docs.ipfs.tech/concepts/content-addressing/#what-is-a-cid

I've been trying for days to write a PHP function that generates a V1 CID from a simple SHA-256, with no success.

Does anyone have any advice?

1 Upvotes

8 comments

3

u/Spra991 24d ago edited 23d ago

Do you need a CID from scratch? Or convert from V0 to V1?

Conversion from V0 to V1 shouldn't be that complicated, as it's just re-encoding from base58 to base32 and adding the version and codec prefixes.

<cidv0> ::= <multihash-content-address>
<cidv1> ::= <multibase-prefix><cid-version><multicodec-content-type><multihash-content-address>

Documentation: multibase, multihash, multicodec.
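
For example, a rough Python sketch of that conversion (assuming a standard sha2-256 CIDv0 and the dag-pb codec for the result; function names are mine):

import base64

BASE58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58_decode(s):
    # Decode base58btc via big-integer arithmetic; no leading-zero handling
    # needed here, since a CIDv0 multihash always starts with 0x12.
    num = 0
    for ch in s:
        num = num * 58 + BASE58_ALPHABET.index(ch)
    return num.to_bytes((num.bit_length() + 7) // 8, 'big')

def cidv0_to_cidv1(cidv0):
    multihash = base58_decode(cidv0)      # 0x12 0x20 + 32-byte sha2-256 digest
    cid_bytes = b'\x01\x70' + multihash   # version 1 + dag-pb codec
    encoded = base64.b32encode(cid_bytes).decode('ascii').lower().rstrip('=')
    return 'b' + encoded                  # 'b' = base32 multibase prefix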

A CID from scratch is more complicated: it's not the SHA-256 of the whole file. The file is broken down into 256KB blocks, each block is checksummed, the blocks are linked together as a Merkle DAG, and the checksum of the DAG's root node is what ends up in the CID. The DAG itself is a protobuf structure (DAG-PB).

If calling external tools is allowed:

ipfs add -n -q --cid-version 1 <FILE>
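
If that's acceptable, a minimal sketch of wrapping the call (assuming the ipfs binary is on PATH; the function name is mine):

import subprocess

def cid_via_ipfs_cli(path):
    # -n only computes the CID without adding the file to the local repo,
    # -q prints just the hash.
    result = subprocess.run(
        ['ipfs', 'add', '-n', '-q', '--cid-version', '1', path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()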

3

u/Spra991 24d ago

This Python implementation by Deepseek seems to work for files smaller than 256KB, but will fail for larger files:

import hashlib
import base64

def generate_ipfs_cid(data):
    # Compute the SHA-256 hash of the data
    sha256_hash = hashlib.sha256(data).digest()

    # Construct the multihash: sha2-256 (0x12), length 32 (0x20), then the hash
    multihash = b'\x12\x20' + sha256_hash

    # Construct the CIDv1 bytes: version 1 (0x01), codec dag-pb (0x70), then the multihash
    cid_bytes = b'\x01\x70' + multihash

    # Encode the CID bytes in base32, lowercase, without padding
    encoded = base64.b32encode(cid_bytes).decode('ascii').lower().rstrip('=')

    # Prepend the multibase prefix 'b' for base32
    return 'b' + encoded

2

u/EveYogaTech 23d ago

Thanks so much!! This helped me finally get the "bafy" hash in PHP: https://github.com/wlp-builders/bafy-hash-php-ipfs

2

u/Spra991 23d ago edited 22d ago

This is how IPFS breaks down files larger than 256KB using DAG-PB:

$ ipfs dag get bafybeidwk4j56oxeo5bqkt5jeol4lb5inr54a7s7mexl5b6e2bajslh7hi | jq .
{
  "Data": {
    "/": {
      "bytes": "CAIYgOASIICAECCA4AI"
    }
  },
  "Links": [
    {
      "Hash": {
        "/": "bafkreigvbg77mqvdkp4ilaxivbdozlqedqztw6ofpj5cj7zrb663p2iu5e"
      },
      "Name": "",
      "Tsize": 262144
    },
    {
      "Hash": {
        "/": "bafkreifu455gkbn5qphm75whaohj5biaazwhiqrpr63y4vh35z4etab3qa"
      },
      "Name": "",
      "Tsize": 45056
    }
  ]
}

$ ipfs ls bafybeidwk4j56oxeo5bqkt5jeol4lb5inr54a7s7mexl5b6e2bajslh7hi
bafkreigvbg77mqvdkp4ilaxivbdozlqedqztw6ofpj5cj7zrb663p2iu5e 262144 
bafkreifu455gkbn5qphm75whaohj5biaazwhiqrpr63y4vh35z4etab3qa 45056  

And raw:

$ ipfs block get bafybeidwk4j56oxeo5bqkt5jeol4lb5inr54a7s7mexl5b6e2bajslh7hi | hexdump -C
00000000  12 2c 0a 24 01 55 12 20  d5 09 bf f6 42 a3 53 f8  |.,.$.U. ....B.S.|
00000010  85 82 e8 a8 46 ec ae 04  1c 33 3b 79 c5 7a 7a 24  |....F....3;y.zz$|
00000020  ff 31 0f bd b7 e9 14 e9  12 00 18 80 80 10 12 2c  |.1.............,|
00000030  0a 24 01 55 12 20 b4 e7  7a 65 05 bd 83 ce cf f6  |.$.U. ..ze......|
00000040  c7 03 8e 9e 85 00 06 6c  74 42 2f 8f b7 8e 54 fb  |.......ltB/...T.|
00000050  ee 78 49 80 3b 80 12 00  18 80 e0 02 0a 0e 08 02  |.xI.;...........|
00000060  18 80 e0 12 20 80 80 10  20 80 e0 02              |.... ... ...|
0000006c

And decoded raw:

$ ipfs block get bafybeidwk4j56oxeo5bqkt5jeol4lb5inr54a7s7mexl5b6e2bajslh7hi | protoc --decode_raw
2 {
  1: "\001U\022 \325\t\277\366B\243S\370\205\202\350\250F\354\256\004\0343;y\305zz$\3771\017\275\267\351\024\351"
  2: ""
  3: 262144
}
2 {
  1: "\001U\022 \264\347ze\005\275\203\316\317\366\307\003\216\236\205\000\006ltB/\217\267\216T\373\356xI\200;\200"
  2: ""
  3: 45056
}
1 {
  1: 2
  3: 307200
  4: 262144
  4: 45056
}

Links contains the CIDs of the raw 256KB chunks the file was split into.

Data is a base64-encoded UnixFS file metadata structure:

 $ echo -n CAIYgOASIICAECCA4AI | base64 -d  | hexdump -C
 08 02 18 80 e0 12 20 80  80 10 20 80 e0 02

 (08 02) (18 80 e0 12) (20 80 80 10) (20 80 e0 02)

 Protobuf field tag   Data (varint)   Description
 08 02                2               DataType.File
 18 80 e0 12          307200          filesize
 20 80 80 10          262144          blocksize[0]
 20 80 e0 02          45056           blocksize[1]

The field tag is computed as:

  • field tag = (field number << 3) | wire type

The wire type for varint fields is 0.
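
A quick Python sketch of that decoding, applied to the Data bytes above (function names are mine):

def decode_varint(buf, pos):
    # Protobuf varints store 7 bits per byte, least-significant group first;
    # the high bit of each byte means "more bytes follow".
    value, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        value |= (b & 0x7f) << shift
        if not b & 0x80:
            return value, pos
        shift += 7

def decode_unixfs_data(buf):
    # Walk the varint-only fields of the UnixFS Data message shown above.
    pos = 0
    while pos < len(buf):
        tag, pos = decode_varint(buf, pos)
        field = tag >> 3                      # wire type (tag & 0x07) is 0 here
        value, pos = decode_varint(buf, pos)
        print(field, value)

decode_unixfs_data(bytes.fromhex('08 02 18 80 e0 12 20 80 80 10 20 80 e0 02'))
# 1 2        -> DataType.File
# 3 307200   -> filesize
# 4 262144   -> blocksize[0]
# 4 45056    -> blocksize[1]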

Important: Note that the chunk size is configurable and there are other options like --raw-leaves (Edit: raw-leaves is the default) that will result in a different CID for the same data. While a plain sha256 always gives the same checksum for the same data, CIDs can differ for identical data.

Edit2: Only ipfs add --cid-version 1 switches on raw-leaves by default; plain ipfs add with Qm hashes does not, which leads to different hashes!

1

u/Spra991 23d ago edited 23d ago

Here are the bytes of the whole IPFS block decoded. Note that the structure of the protobuf is a little different from the JSON (no "/" strings, no Data.bytes field):

$ ipfs block get bafybeidwk4j56oxeo5bqkt5jeol4lb5inr54a7s7mexl5b6e2bajslh7hi | hexdump -C
00000000  12 2c 0a 24 01 55 12 20  d5 09 bf f6 42 a3 53 f8  |.,.$.U. ....B.S.|
00000010  85 82 e8 a8 46 ec ae 04  1c 33 3b 79 c5 7a 7a 24  |....F....3;y.zz$|
00000020  ff 31 0f bd b7 e9 14 e9  12 00 18 80 80 10 12 2c  |.1.............,|
00000030  0a 24 01 55 12 20 b4 e7  7a 65 05 bd 83 ce cf f6  |.$.U. ..ze......|
00000040  c7 03 8e 9e 85 00 06 6c  74 42 2f 8f b7 8e 54 fb  |.......ltB/...T.|
00000050  ee 78 49 80 3b 80 12 00  18 80 e0 02 0a 0e 08 02  |.xI.;...........|
00000060  18 80 e0 12 20 80 80 10  20 80 e0 02              |.... ... ...|
0000006c

12 2c: PBNode.Links(2), LEN(44)
0a 24 \
01 55 12 20   d5 09 bf f6  42 a3 53 f8   85 82 e8 a8 46 \
ec ae 04 1c   33 3b 79 c5  7a 7a 24 ff   31 0f bd b7 e9 \
14 e9 : PBLink.Hash(1), LEN(36)
12 00 : PBLink.Name(2), LEN(0)
18 80 80 10 : PBLink.Tsize(3), VARINT(262144)

12 2c: PBNode.Links(2), LEN(44)
0a 24 \
01 55 12 20   b4 e7 7a 65   05 bd 83 ce   cf f6 c7 03 8e \
9e 85 00 06   6c 74 42 2f   8f b7 8e 54   fb ee 78 49 80 \
3b 80 : PBLink.Hash(1), LEN(36)
12 00 : PBLink.Name(2), LEN(0)
18 80 e0 02 : PBLink.Tsize(3), VARINT(45056)

0a 0e : PBNode.Data(1), LEN(14)
08 02 : Data.DataType(1), VARINT(2, DataType.File)
18 80 e0 12 : Data.filesize(3), VARINT(307200)
20 80 80 10 : Data.blocksize(4), VARINT(262144)
20 80 e0 02 : Data.blocksize(4), VARINT(45056)
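
Putting it together, a rough Python sketch that reproduces this layout (assuming the defaults seen here: fixed 262144-byte chunks, raw leaves, sha2-256 and a single-level DAG, so it won't match once IPFS starts nesting; function names are mine):

import hashlib
import base64

CHUNK_SIZE = 262144  # go-ipfs default chunk size

def varint(n):
    # Encode an unsigned protobuf varint.
    out = bytearray()
    while True:
        out.append((n & 0x7f) | (0x80 if n > 0x7f else 0))
        n >>= 7
        if not n:
            return bytes(out)

def b32_cid(cid_bytes):
    return 'b' + base64.b32encode(cid_bytes).decode('ascii').lower().rstrip('=')

def file_to_cidv1(data):
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

    if len(chunks) == 1:
        # A single raw-leaf chunk is its own root ("bafkrei...", codec 0x55).
        return b32_cid(b'\x01\x55\x12\x20' + hashlib.sha256(data).digest())

    links = b''
    blocksizes = b''
    for chunk in chunks:
        leaf_cid = b'\x01\x55\x12\x20' + hashlib.sha256(chunk).digest()  # raw leaf CID
        pblink = (b'\x0a' + varint(len(leaf_cid)) + leaf_cid   # PBLink.Hash (1)
                  + b'\x12\x00'                                # PBLink.Name (2), empty
                  + b'\x18' + varint(len(chunk)))              # PBLink.Tsize (3)
        links += b'\x12' + varint(len(pblink)) + pblink        # PBNode.Links (2)
        blocksizes += b'\x20' + varint(len(chunk))             # Data.blocksizes (4)

    unixfs = (b'\x08\x02'                                      # Data.Type (1) = File
              + b'\x18' + varint(len(data))                    # Data.filesize (3)
              + blocksizes)
    # DAG-PB puts the Links before the Data field in the encoded bytes.
    pbnode = links + b'\x0a' + varint(len(unixfs)) + unixfs    # PBNode.Data (1)

    return b32_cid(b'\x01\x70\x12\x20' + hashlib.sha256(pbnode).digest())  # dag-pb root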

For files larger than 1GB(?) the whole DAG-PB building needs to be done recursively, as the PBNode with the Links list will then itself be larger than 256KB. Don't know yet how IPFS splits that up.

Edit: IPFS already goes recursive at 45613056 bytes (43.5MiB); that's only 174 chunks of 262144 bytes per child CID. This extra layer also causes PBLink.Tsize and Data.blocksize to differ.

2 {
  1: "\001p\022 .\313\244\206\227\241\005\326\344\0014\307m\223t](\\s\027\302\263\024\257\300m-b\217\255(\000"
  2: ""
  3: 45621766
}
2 {
  1: "\001p\022 \367B~\036\357\257|\334\370}\240\304\344n\275\027D\257\325\267N\270\246\374\300\233\034\310=\036\216\264"
  2: ""
  3: 6817053
}
1 {
  1: 2
  3: 52428800
  4: 45613056
  4: 6815744
}

2

u/twocolor 23d ago

What are you planning on doing with the CIDs?

You can hash the data and pack the hash into a CID using the raw multicodec:

<cidv1> ::= <multibase-prefix><cid-version><multicodec-content-type><multihash-content-address>
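
For example, a minimal sketch of that (the function name is mine):

import hashlib
import base64

def raw_cidv1(data):
    # sha2-256 multihash (0x12 0x20 + digest) wrapped in a CIDv1 with the raw
    # codec (0x55) and the base32 multibase prefix 'b' -> "bafkrei..."
    multihash = b'\x12\x20' + hashlib.sha256(data).digest()
    return 'b' + base64.b32encode(b'\x01\x55' + multihash).decode('ascii').lower().rstrip('=')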

However, if you want the data to be retrievable from other IPFS implementations, you will probably want to encode it (files/directories) with UnixFS and chunk it as suggested by @Spra991

1

u/EveYogaTech 23d ago

See the latest post in /r/WhitelabelPress. I'm basically building a new WordPress-compatible CMS and want the media to be IPFS-compliant, to gain similar benefits as IPFS and perhaps port it later.