Indexing NFT Transfers on Moonbeam with SQD (formerly Subsquid)¶
by Massimo Luraschi
Introduction¶
SQD (formerly Subsquid) is a data network that allows rapid and cost-efficient retrieval of blockchain data from 100+ chains using SQD's decentralized data lake and open-source SDK.
The SDK offers a highly customizable Extract-Transform-Load-Query stack and indexing speeds of up to and beyond 50,000 blocks per second when indexing events and transactions.
SQD has native and full support for the Ethereum Virtual Machine (EVM) and Substrate data. This allows developers to extract on-chain data from any of the Moonbeam networks, process EVM logs and Substrate entities (events, extrinsics, and storage items) in one single project, and serve the resulting data with one single GraphQL endpoint. With SQD, filtering by EVM topic, contract address, and block range is possible.
This guide will explain how to create a SQD project (also known as a "Squid") from a template (indexing Moonsama transfers on Moonriver) and change it to index ERC-721 token transfers on the Moonbeam network. As such, you'll be working with the Transfer EVM event topic. This guide can be adapted for Moonbase Alpha as well.
Checking Prerequisites¶
For a Squid project to be able to run, you need to have Node.js, Docker, and the Squid CLI installed.
Scaffold a Project From a Template¶
We will start with the frontier-evm squid template, available through sqd init. It is built to index EVM smart contracts deployed on Moonriver, but it can also index Substrate events. To retrieve the template and install the dependencies, run the following:
sqd init moonbeam-tutorial --template frontier-evm
cd moonbeam-tutorial
npm ci
Define the Entity Schema¶
Next, we ensure the Squid's data schema defines the entities that we want to track. We are interested in:
- Token transfers
- Ownership of tokens
- Contracts and their minted tokens
The EVM template already contains a schema file that defines Token and Transfer entities, but we need to modify it for our use case and add Owner and Contract entities:
type Token @entity {
id: ID!
owner: Owner
uri: String
transfers: [Transfer!]! @derivedFrom(field: "token")
contract: Contract
}
type Owner @entity {
id: ID!
ownedTokens: [Token!]! @derivedFrom(field: "owner")
}
type Contract @entity {
id: ID!
name: String
symbol: String
totalSupply: BigInt
mintedTokens: [Token!]! @derivedFrom(field: "contract")
}
type Transfer @entity {
id: ID!
token: Token!
from: Owner
to: Owner
timestamp: BigInt
block: BigInt!
}
It's worth noting a couple of things in this schema definition:
- @entity - signals that this type will be translated into an ORM model that is going to be persisted in the database
- @derivedFrom - signals that the field will not be persisted in the database. Instead, it will be derived from the entity relations
- type references (e.g., owner: Owner) - when used on entity types, they establish a relation between two entities
TypeScript entity classes have to be regenerated whenever the schema is changed, and to do that, we use the squid-typeorm-codegen tool. The pre-packaged commands.json already comes with a codegen shortcut, so we can invoke it with sqd:
sqd codegen
The generated entity classes can then be browsed in the src/model/generated directory. Each entity should have a .model.ts file.
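To illustrate the shape of these generated classes, here is a simplified mock (not the actual generated code, which also carries TypeORM decorators and relation wiring): each entity becomes a class whose constructor accepts a partial props object.

```typescript
// Simplified mock of generated entity classes; the real files in
// src/model/generated also carry TypeORM column and relation decorators
class Owner {
  id!: string;
  constructor(props?: Partial<Owner>) {
    Object.assign(this, props);
  }
}

class Token {
  id!: string;
  uri?: string;
  owner?: Owner;
  constructor(props?: Partial<Token>) {
    Object.assign(this, props);
  }
}

// Entities are instantiated with plain objects, exactly as the
// mapping code in src/main.ts does later in this guide
const owner = new Owner({ id: '0xabc' });
const token = new Token({ id: 'EXRP-1', uri: 'ipfs://example/1', owner });
```

Note that derived fields like ownedTokens do not appear as stored properties; they are resolved from the relation at query time.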
ABI Definition and Type Generation¶
SQD maintains tools for the automated generation of TypeScript classes to handle Substrate data sources (events, extrinsics, storage items). Possible runtime upgrades are automatically detected and accounted for.
Similar functionality is available for EVM indexing through the squid-evm-typegen tool. It generates TypeScript modules for handling EVM logs and transactions based on a JSON ABI of the contract.
Our squid will need such a module for the ERC-721-compliant part of the contracts' interfaces. Once again, the template repository already includes it, but it is still important to explain what needs to be done in case you want to index a different type of contract.
The procedure uses a sqd script from the template that runs squid-evm-typegen to generate TypeScript facades for JSON ABIs stored in the abi folder. Place any ABIs you require for interfacing your contracts there and run:
sqd typegen:evm
The results will be stored at src/abi. One module will be generated for each ABI file, including constants useful for filtering, functions for decoding EVM events, and functions defined in the ABI.
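To make the generated module less of a black box, here is a hand-rolled sketch of what decoding an ERC-721 Transfer log involves. The decodeTransfer helper below is illustrative only; in the squid, use the generated erc721.events.Transfer.decode instead. Since all three Transfer parameters are indexed in ERC-721, the values arrive as log topics:

```typescript
// topic0 of the ERC-721 Transfer event: keccak256 of its signature,
// 'Transfer(address,address,uint256)'
const TRANSFER_TOPIC =
  '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef';

// Illustrative decoder: indexed addresses are left-padded to 32 bytes
// in the topics, so keep only the last 20 bytes (40 hex characters)
function decodeTransfer(topics: string[]): {
  from: string;
  to: string;
  tokenId: bigint;
} {
  return {
    from: '0x' + topics[1].slice(26),
    to: '0x' + topics[2].slice(26),
    tokenId: BigInt(topics[3]),
  };
}

// Example: a mint (from the zero address) of token 1
const decoded = decodeTransfer([
  TRANSFER_TOPIC,
  '0x0000000000000000000000000000000000000000000000000000000000000000',
  '0x00000000000000000000000009534cf342ad376ddba6c3e94490c3f161f42ed2',
  '0x0000000000000000000000000000000000000000000000000000000000000001',
]);
```

The generated module also exposes this topic as erc721.events.Transfer.topic, which the processor configuration below uses for filtering.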
Processor Object and the Batch Handler¶
SQD SDK provides users with the SubstrateBatchProcessor class. The SubstrateBatchProcessor declaration and configuration are in the src/processor.ts file. Its instances connect to SQD Network gateways at chain-specific URLs to get chain data and apply custom transformations. The indexing begins at the starting block and keeps up with new blocks after reaching the tip.
The SubstrateBatchProcessor exposes methods to "subscribe" to specific data, such as Substrate events, extrinsics, and storage items, or, for EVM, logs and transactions. The actual data processing is then started by calling the .run() function, as seen in the src/main.ts file. This will start generating requests to the gateway for batches of data specified in the configuration and will trigger the callback function every time a batch is returned by the gateway.
This callback function expresses all the mapping logic. This is where chain data decoding should be implemented and where the code to save processed data on the database should be defined.
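The batch flow can be sketched with a toy, self-contained mock (names like runMock are hypothetical, and the sketch is synchronous for brevity; the real entry point is the async processor.run, shown later in this guide):

```typescript
// Toy sketch of the batch-handler control flow with mocked batch data
type MockEvent = { name: string };
type MockBlock = { header: { height: number }; events: MockEvent[] };
type MockCtx = { blocks: MockBlock[] };

// Stand-in for processor.run(db, handler): the SDK triggers the
// handler once per batch returned by the gateway
function runMock(batches: MockCtx[], handler: (ctx: MockCtx) => void) {
  for (const ctx of batches) handler(ctx);
}

// The handler iterates blocks, then events, keeping only what it needs
const seen: number[] = [];
runMock(
  [
    {
      blocks: [
        {
          header: { height: 1 },
          events: [{ name: 'EVM.Log' }, { name: 'Balances.Transfer' }],
        },
      ],
    },
  ],
  (ctx) => {
    for (const block of ctx.blocks)
      for (const event of block.events)
        if (event.name === 'EVM.Log') seen.push(block.header.height);
  }
);
```

The real handler in src/main.ts follows exactly this blocks-then-events iteration pattern, with database writes at the end of each batch.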
Manage the EVM Contracts¶
Before we begin defining the mapping logic of the Squid, we will write a src/contracts.ts utility module to manage the involved EVM contracts. It will export:
- Addresses of the Exiled Racers Pilot and Exiled Racers Racecraft contracts
- A Map from the contract addresses to hardcoded Contract entity instances
Now, let's take a look at the complete contents of the file:
import { Contract } from './model';
export const pilots =
'0x515e20e6275CEeFe19221FC53e77E38cc32b80Fb'.toLowerCase();
export const racecrafts =
'0x104b904e19fBDa76bb864731A2C9E01E6b41f855'.toLowerCase();
export const contractMapping: Map<string, Contract> = new Map();
// Create a Contract entity object for the Exiled Racers Pilot contract
contractMapping.set(
pilots,
new Contract({
id: pilots,
name: 'Exiled Racers Pilot',
symbol: 'EXRP',
totalSupply: 1729n,
mintedTokens: [],
})
);
// Create a Contract entity object for the Exiled Racers Racecraft contract
contractMapping.set(
racecrafts,
new Contract({
id: racecrafts,
name: 'Exiled Racers Racecraft',
symbol: 'EXRR',
totalSupply: 1617n,
mintedTokens: [],
})
);
Configure the Processor¶
In the src/processor.ts file, Squids instantiate the processor (a SubstrateBatchProcessor in our case) and configure it.
We adapt the template code to process EVM logs for the two Exiled Racers contracts and point the processor data source setting to the Moonbeam SQD Network gateway URL. Here is the result:
import { assertNotNull } from '@subsquid/util-internal';
import {
BlockHeader,
DataHandlerContext,
SubstrateBatchProcessor,
SubstrateBatchProcessorFields,
Event as _Event,
Call as _Call,
Extrinsic as _Extrinsic,
} from '@subsquid/substrate-processor';
import * as erc721 from './abi/erc721';
import { pilots, racecrafts } from './contracts';
export const processor = new SubstrateBatchProcessor()
.setBlockRange({ from: 1250496 })
.setGateway('https://v2.archive.subsquid.io/network/moonbeam-substrate')
.setRpcEndpoint({
url: assertNotNull(process.env.RPC_ENDPOINT), // TODO: Add the RPC URL to your .env file
rateLimit: 10,
})
// Filter Transfer events from the Exiled Racers Pilot contract
.addEvmLog({
address: [pilots],
range: { from: 1250496 }, // Block of the first transfer
topic0: [erc721.events.Transfer.topic],
})
// Filter Transfer events from the Exiled Racers Racecraft contract
.addEvmLog({
address: [racecrafts],
range: { from: 1398762 }, // Block of the first transfer
topic0: [erc721.events.Transfer.topic],
})
// The timestamp is not provided unless we explicitly request it
.setFields({
block: {
timestamp: true,
},
});
export type Fields = SubstrateBatchProcessorFields<typeof processor>;
export type Block = BlockHeader<Fields>;
export type Event = _Event<Fields>;
export type Call = _Call<Fields>;
export type Extrinsic = _Extrinsic<Fields>;
export type ProcessorContext<Store> = DataHandlerContext<Store, Fields>;
If you are adapting this guide for Moonbase Alpha, be sure to update the data source to the correct network: 'https://v2.archive.subsquid.io/network/moonbase-substrate'
Note
This code expects to find a working Moonbeam RPC URL in the RPC_ENDPOINT environment variable. You can get your own endpoint and API key from a supported Endpoint Provider. Set it in the .env file and in SQD Cloud secrets if and when you deploy your Squid there. We tested the code using a public endpoint at wss://wss.api.moonbeam.network; we recommend using private endpoints for production.
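For local development, the variable can go in the project's .env file; for example, using the public endpoint mentioned above:

```text
RPC_ENDPOINT=wss://wss.api.moonbeam.network
```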
Define the Batch Handler¶
We'll need to rewrite the batch handler logic in the src/main.ts
file. We'll iterate over all of the events for each batch of blocks to find the EVM logs relative to the Exiled Racers contracts. We'll extract the from and to addresses and the token ID from the EVM logs. Then, we'll format this data as defined in the schema and save it to the database.
Here is the result:
import { Store, TypeormDatabase } from '@subsquid/typeorm-store';
import { In } from 'typeorm';
import { contractMapping, pilots, racecrafts } from './contracts';
import { Owner, Token, Transfer } from './model';
import * as erc721 from './abi/erc721';
import { processor, ProcessorContext, Event, Block } from './processor';
import { getEvmLog } from '@subsquid/frontier';
let contractsSaved = false;
processor.run(new TypeormDatabase(), async (ctx) => {
const transfersData: TransferData[] = [];
for (const block of ctx.blocks) {
for (const event of block.events) {
// If the event is an EVM log and the contract address emitting the log is
// from the Exiled Racers Pilots or Racecrafts contracts, process the logs
if (event.name === 'EVM.Log') {
if (event.args.address) {
if (
event.args.address.toLowerCase() == pilots ||
event.args.address.toLowerCase() == racecrafts
) {
// For each event, get the transfer data
const transfer = handleTransfer(block.header, event);
transfersData.push(transfer);
}
}
}
}
}
// Save the contract addresses if they haven't already been saved. This will
// only need to happen once, so that is why the contractsSaved flag is used
if (!contractsSaved) {
await ctx.store.upsert([...contractMapping.values()]);
contractsSaved = true;
}
await saveTransfers(ctx, transfersData);
});
type TransferData = {
id: string;
from: string;
to: string;
token: bigint;
timestamp?: bigint;
block: number;
contractAddress: string;
};
function handleTransfer(block: Block, event: Event): TransferData {
// Decode the event log into an EVM log
const evmLog = getEvmLog(event);
// Decode the EVM log to get the from and to addresses and the token ID
const { from, to, tokenId } = erc721.events.Transfer.decode(evmLog);
return {
id: event.id,
from,
to,
token: tokenId,
timestamp: block.timestamp ? BigInt(block.timestamp) : undefined,
block: block.height,
contractAddress: event.args.address,
};
}
async function saveTransfers(
ctx: ProcessorContext<Store>,
transfersData: TransferData[]
) {
// Format the token ID in SYMBOL-ID format
const getTokenId = (transferData: TransferData) =>
`${
contractMapping.get(transferData.contractAddress)?.symbol ?? ''
}-${transferData.token.toString()}`;
const tokensIds: Set<string> = new Set();
const ownersIds: Set<string> = new Set();
// Iterate over the transfers data to get the token IDs and owners
for (const transferData of transfersData) {
tokensIds.add(getTokenId(transferData));
ownersIds.add(transferData.from);
ownersIds.add(transferData.to);
}
// Use the token IDs and owners to check the database for existing entries
const tokens: Map<string, Token> = new Map(
(await ctx.store.findBy(Token, { id: In([...tokensIds]) })).map((token) => [
token.id,
token,
])
);
const owners: Map<string, Owner> = new Map(
(await ctx.store.findBy(Owner, { id: In([...ownersIds]) })).map((owner) => [
owner.id,
owner,
])
);
const transfers: Set<Transfer> = new Set();
// Process and format all of the data to save to the database
for (const transferData of transfersData) {
// Create a contract instance, which will be used to query the
// contract's tokenURI function below
const contract = new erc721.Contract(
ctx,
{ height: transferData.block },
transferData.contractAddress
);
// Try to get the from address from the owners pulled from the database
let from = owners.get(transferData.from);
// If there isn't an existing entry for this owner, create one
if (from == null) {
from = new Owner({ id: transferData.from });
owners.set(from.id, from);
}
// Try to get the to address from the owners pulled from the database
let to = owners.get(transferData.to);
// If there isn't an existing entry for this owner, create one
if (to == null) {
to = new Owner({ id: transferData.to });
owners.set(to.id, to);
}
const tokenId = getTokenId(transferData);
// Try to get the tokenId from the tokens pulled from the database
let token = tokens.get(tokenId);
// If there isn't an existing entry for this token, create one
if (token == null) {
token = new Token({
id: tokenId,
uri: await contract.tokenURI(transferData.token),
contract: contractMapping.get(transferData.contractAddress),
});
tokens.set(token.id, token);
}
// Now that the owner entity has been created, we can establish
// the connection between the Owner and the Token
token.owner = to;
// Since the Owner and Token entity objects have been created,
// the last step is to create the Transfer entity object
const { id, block, timestamp } = transferData;
const transfer = new Transfer({
id,
block: BigInt(block),
timestamp,
from,
to,
token,
});
transfers.add(transfer);
}
// Save all of the data from this batch to the database
await ctx.store.upsert([...owners.values()]);
await ctx.store.upsert([...tokens.values()]);
await ctx.store.insert([...transfers]);
}
Note
The contract.tokenURI call accesses the state of the contract via a chain RPC endpoint. This can slow down indexing, but it is the only way to access this data. You'll find more information on accessing state in the dedicated section of the SQD docs.
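If indexing speed becomes an issue, one common mitigation is to cache tokenURI results, since a token's URI rarely changes. A minimal sketch, assuming a cachedTokenUri helper of our own (not part of the template or the SDK):

```typescript
// Hypothetical helper (not part of the template): memoize tokenURI
// lookups so each token's URI is fetched over RPC only once
const uriCache: Map<string, string> = new Map();

async function cachedTokenUri(
  tokenId: string,
  fetchUri: () => Promise<string>
): Promise<string> {
  const hit = uriCache.get(tokenId);
  if (hit !== undefined) return hit;
  const uri = await fetchUri();
  uriCache.set(tokenId, uri);
  return uri;
}
```

In saveTransfers, the contract.tokenURI(transferData.token) call could then be wrapped in this helper, keyed by the SYMBOL-ID token ID.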
Launch and Set Up the Database¶
Squid projects automatically manage the database connection and schema via an ORM abstraction. In this approach, the schema is managed through migration files. Because we made changes to the schema, we need to remove the existing migration(s), create a new one, and then apply the new migration.
This involves the following steps:
- Make sure you start with a clean Postgres database. The following commands drop and recreate a Postgres instance in Docker:
sqd down
sqd up
- Generate the new migration (this will wipe any old migrations):
sqd migration:generate
Note
This command runs the following commands:
- clean - deletes all the build artifacts
- build - creates a fresh build of the project
- migration:clean - cleans the migration folder
- migration:generate - generates a database migration matching the TypeORM entities
When you launch the processor in the next section, your migrations will be applied automatically. However, if you need to apply them manually, you can do so using the sqd migration:apply command.
Launch the Project¶
To launch the processor, run the following command (this will block the current terminal):
sqd process
Note
This command runs the following commands:
- clean - deletes all the build artifacts
- build - creates a fresh build of the project
- migration:apply - applies the database migrations
Finally, in a separate terminal window, launch the GraphQL server:
sqd serve
Visit localhost:4350/graphql to access the GraphiQL console. From this window, you can perform queries such as this one to fetch a batch of owners:
query MyQuery {
owners(limit: 10) {
id
}
}
Or this other one, looking up the tokens owned by a given owner:
query MyQuery {
tokens(where: {owner: {id_eq: "0x09534CF342ad376DdBA6C3e94490C3f161F42ed2"}}) {
uri
contract {
id
name
symbol
totalSupply
}
}
}
Have fun playing around with queries; after all, it's a playground!
Publish the Project¶
SQD offers a SaaS solution to host projects created by its community. All templates ship with a deployment manifest file named squid.yml, which can be used with the Squid CLI command sqd deploy.
Please refer to the SQD Cloud Quickstart page on SQD's documentation site for more information.
Example Project Repository¶
You can view the template used here and many other example repositories on SQD's examples organization on GitHub.
SQD's documentation contains informative material, and it's the best place to start if you are curious about some aspects that were not fully explained in this guide.
| Created: April 5, 2022