Tangle Improvement Proposal (TIP) Repository

TIPs are improvement proposals for improving the IOTA technology stack.

Building the IOTA ecosystem is a community effort; therefore, we welcome anyone to propose, discuss and debate ideas that will later become formalized TIPs.

Propose new ideas

Do you have an idea how to improve the IOTA technology stack?

  • Head over to the discussions page to browse already submitted ideas or share yours!
  • Once your idea is discussed, you can submit a draft TIP (template here) as a PR to the repository.
  • You will receive feedback from the TIP Editors and review from core devs.
  • Once accepted, your TIP is merged as Draft.
  • It is your responsibility to drive its implementation and to present a clear plan on how the new feature will be adopted by the network.
  • Once the implementation is ready and testing yields satisfactory results, the TIP becomes Proposed.
  • Proposed TIPs that are supported by the majority of the network become Active.

You may find more information about the TIP Process in TIP-1.

Stardust TIPs

Stardust is the next upgrade of the IOTA protocol, adding tokenization and smart contract chain support among many other improvements. Browse the list of TIPs below with the Stardust tag to learn more about what changes.

List of TIPs

  • Last updated: 2023-10-19
  • The Status of a TIP reflects its current state with respect to its progression to being supported on the IOTA mainnet.
    • Draft TIPs are work in progress. They may or may not have a working implementation on a testnet.
    • Proposed TIPs are demonstrated to have a working implementation. These TIPs are supported on Shimmer, the staging network of IOTA.
    • Active TIPs are supported on the IOTA mainnet.
    • Replaced TIPs have been replaced by a newer TIP.
    • Obsolete TIPs are no longer in use.


| # | Title | Description | Type | Layer | Status | Initial Target |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | TIP Process | Purpose and guidelines of the contribution framework | Process | - | Active | - |
| 2 | White Flag Ordering | Mitigate conflict spamming by ignoring conflicts | Standards | Core | Active | Chrysalis |
| 3 | Uniform Random Tip Selection | Perform fast tip-selection to increase message throughput | Standards | Core | Active | Chrysalis |
| 4 | Milestone Merkle Validation | Add Merkle tree hash to milestone for local ledger state verification | Standards | Core | Active | Chrysalis |
| 5 | Binary To Ternary Encoding | Define the conversion between binary and ternary data | Standards | Core | Active | Chrysalis |
| 6 | Tangle Message | Generalization of the Tangle transaction concept | Standards | Core | Replaced by TIP-24 | Chrysalis |
| 7 | Transaction Payload | UTXO-based transaction structure | Standards | Core | Replaced by TIP-20 | Chrysalis |
| 8 | Milestone Payload | Coordinator issued milestone structure with Ed25519 authentication | Standards | Core | Replaced by TIP-29 | Chrysalis |
| 9 | Local Snapshot File Format | File format to export/import ledger state | Standards | Interface | Replaced by TIP-35 | Chrysalis |
| 10 | Mnemonic Ternary Seed | Represent ternary seed as a mnemonic sentence | Standards | IRC | Obsolete | Legacy IOTA |
| 11 | Bech32 Address Format | Extendable address format supporting various signature schemes and address types | Standards | Interface | Replaced by TIP-31 | Chrysalis |
| 12 | Message PoW | Define message proof-of-work as a means to rate-limit the network | Standards | Core | Active | Chrysalis |
| 13 | REST API | Node REST API routes and objects in OpenAPI Specification | Standards | Interface | Replaced by TIP-25 | Chrysalis |
| 14 | Ed25519 Validation | Adopt ZIP-215 to explicitly define Ed25519 validation criteria | Standards | Core | Active | Chrysalis |
| 15 | Dust Protection | Prevent bloating the ledger size with dust outputs | Standards | Core | Replaced by TIP-19 | Chrysalis |
| 16 | Event API | Node event API definitions in AsyncAPI Specification | Standards | Interface | Replaced by TIP-28 | Chrysalis |
| 17 | Wotsicide | Define migration from legacy WOTS addresses to post-Chrysalis Phase 2 network | Standards | Core | Obsolete | Chrysalis |
| 18 | Multi-Asset Ledger and ISC Support | Transform IOTA into a multi-asset ledger that supports running IOTA Smart Contracts | Standards | Core | Active | Stardust |
| 19 | Dust Protection Based on Byte Costs | Prevent bloating the ledger size with dust outputs | Standards | Core | Active | Stardust |
| 20 | Transaction Payload with New Output Types | UTXO-based transaction structure with TIP-18 | Standards | Core | Active | Stardust |
| 21 | Serialization Primitives | Introduce primitives to describe the binary serialization of objects | Standards | Core | Active | Stardust |
| 22 | IOTA Protocol Parameters | Describes the global protocol parameters for the IOTA protocol | Standards | Core | Active | Stardust |
| 23 | Tagged Data Payload | Payload for arbitrary data | Standards | Core | Active | Stardust |
| 24 | Tangle Block | A new version of TIP-6 that renames messages to blocks and removes the Indexation Payload in favor of the Tagged Data Payload. Replaces TIP-6. | Standards | Core | Active | Stardust |
| 25 | Core REST API | Node Core REST API routes and objects in OpenAPI Specification. Replaces TIP-13. | Standards | Interface | Active | Stardust |
| 26 | UTXO Indexer REST API | UTXO Indexer REST API routes and objects in OpenAPI Specification. | Standards | Interface | Active | Stardust |
| 27 | IOTA NFT standards | Define NFT metadata standard, collection system and creator royalties | Standards | IRC | Active | Stardust |
| 28 | Node Event API | Node event API definitions in AsyncAPI Specification. Replaces TIP-16. | Standards | Interface | Active | Stardust |
| 29 | Milestone Payload | Milestone Payload with keys removed from essence. Replaces TIP-8. | Standards | Core | Active | Stardust |
| 30 | Native Token Metadata Standard | A JSON schema that describes token metadata format for native token foundries | Standards | IRC | Active | Stardust |
| 31 | Bech32 Address Format for IOTA and Shimmer | Extendable address format supporting various signature schemes and address types. Replaces TIP-11. | Standards | Interface | Active | Stardust |
| 32 | Shimmer Protocol Parameters | Describes the global protocol parameters for the Shimmer network | Standards | Core | Active | Stardust |
| 33 | Public Token Registry | Defines an open public registry for NFT collection ID and native tokens metadata | Standards | IRC | Draft | Stardust |
| 34 | Wotsicide (Stardust update) | Define migration from legacy W-OTS addresses to post-Chrysalis networks. Replaces TIP-17. | Standards | Core | Obsolete | Stardust |
| 35 | Local Snapshot File Format (Stardust Update) | File format to export/import ledger state. Replaces TIP-9. | Standards | Interface | Active | Stardust |
| 37 | Dynamic Proof-of-Work | Dynamically adapt the PoW difficulty | Standards | Core | Withdrawn | Stardust |

Need help?

If you want to get involved in the community, need help getting started, have any issues related to the repository or just want to discuss blockchain, distributed ledgers, and IoT with other people, feel free to join our Discord.

tip: 1
title: TIP Process
description: Purpose and guidelines of the contribution framework
author: Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/discussions
status: Active
type: Process
created: 2021-12-15

Abstract

A Tangle Improvement Proposal (TIP) is a design document providing information to the IOTA community, or describing a new feature for IOTA or its processes or environment. The TIP should provide a concise technical specification of the feature and a rationale for the feature.

TIPs are the primary mechanism for proposing new features and standards to the IOTA protocol and related applications, for collecting input from the wider community, and for documenting the design decisions that go into the IOTA technology.

TIPs are maintained as text files inside the repository, therefore the history and evolution of protocol features are transparent and well documented.

This TIP defines the TIP Process itself to establish a common way of working.

Motivation

The motivation of this TIP is to create a public platform to discuss improvement ideas related to the IOTA technology and define an easy-to-follow process of contributing to their development and implementation.

Specification

TIP Types

There are 3 types of TIPs:

  • A Standards Track TIP describes any change that affects most or all IOTA node implementations, such as a change to the network protocol, a change in transaction validity rules, or any change or addition that affects the interoperability of applications using IOTA. Standards Track TIPs consist of two parts, a design document and a reference implementation. Standards Track TIPs can be broken down into layers:
    • Core: includes improvements requiring a consensus fork (e.g. new transaction validation rules, change in protocol message layouts), as well as any change that concerns the protocol specification.
    • Networking: includes improvements around the networking layer of the network, e.g. gossip protocol or autopeering.
    • Interface: includes improvements around the client APIs of base layer nodes as well as around the interface definitions of IOTA Smart Contracts (ISC), such as contract schemas or ISC node APIs.
    • IRC: includes improvements around application-level standards and conventions such as contract standards, token standards or metadata format standards.
  • An Informational TIP describes an IOTA design issue, or provides general guidelines or information to the IOTA community, but does not propose a new feature. Informational TIPs do not necessarily represent an IOTA community consensus or recommendation, so users and implementors are free to ignore Informational TIPs or follow their advice.
  • A Process TIP describes a process surrounding IOTA, or proposes a change to (or an event in) a process. Process TIPs are like Standards Track TIPs but apply to areas other than the IOTA protocol itself. They may propose an implementation, but not to IOTA's codebase; they often require community consensus; unlike Informational TIPs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in IOTA development.

It is highly recommended that a TIP outlines a single key proposal, idea or feature; the narrower the scope of the TIP is, the easier it becomes to reach consensus on the proposed feature and incorporate it into the protocol. Several TIPs can form a bundle of changes when linked to each other.

TIP Format and Structure

TIPs must adhere to the format and structure requirements that are outlined in this document. A TIP is written in Markdown format and should have the following parts (optional parts are marked with a *):

| Name | Description |
| --- | --- |
| Preamble | RFC 822 style headers containing metadata about the TIP, including the TIP number, a short descriptive title (limited to a maximum of 44 characters), a description (limited to a maximum of 140 characters), and the author details. Irrespective of the category, the title and description should not include the TIP number. See below for details. |
| Abstract | A short summary of the technical issue being addressed by the TIP. |
| Motivation | A motivation section is critical for TIPs that want to change the IOTA protocol. It should clearly explain why the existing protocol specification is inadequate to address the problem that the TIP solves. TIP submissions without sufficient motivation may be rejected outright. |
| Specification | The technical specification should describe the syntax and semantics of any new feature. The specification should be detailed enough to allow competing, interoperable implementations for any of the current IOTA platforms. |
| Rationale | The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g. how the feature is supported in other languages. The rationale may also provide evidence of consensus within the community, and should discuss important objections or concerns raised during discussion. |
| Backwards Compatibility* | All TIPs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The TIP must explain how the author proposes to deal with these incompatibilities. TIP submissions without a sufficient backwards compatibility treatise may be rejected outright. |
| Test Cases* | Test cases for an implementation are mandatory for TIPs affecting consensus changes. Tests should either be inlined in the TIP as data or placed in the TIP folder. |
| Reference Implementation* | An optional section that contains a reference/example implementation that people can use to assist in understanding or implementing this specification. |
| Copyright | All TIPs must be in the public domain. See the bottom of this TIP for an example copyright waiver. |

TIP Template

The template to follow for new TIPs is located in the repository.

TIP Process

Parties involved in the process are:

  • TIP author: you, the champion who proposes a new TIP. It is the responsibility of the TIP author to drive the progression of the TIP to Active status. This includes initiating public discussion and implementing the proposal as well.
  • TIP editor: they deal with administering the TIP process and ensure process requirements are fulfilled.
  • Technical Committee: technical experts of IOTA who evaluate new TIPs, provide feedback and ensure that only sound and secure features are added to the protocol.

TIP Statuses

The status of the TIP describes its current stage in the TIP process.

| Status | Description |
| --- | --- |
| Idea | An idea for an improvement to the IOTA technology. Not yet tracked as an official TIP. |
| Draft | The idea has been formally accepted in the repository, and is being worked on by its authors. |
| Proposed | The TIP has a working implementation and has clear plans on how to progress to Active status. |
| Active | The TIP is deployed to the main network or some TIP-specific adoption criteria have been met. |
| Deferred | The TIP author(s) are not currently working on the TIP, but plan to continue in the future. The TIP is on hold. |
| Rejected | The TIP is rejected. |
| Withdrawn | The TIP has been withdrawn by the TIP author(s). |
| Replaced | The TIP is replaced by a newer TIP. It must point to the new TIP in the header. |
| Obsolete | The TIP is rendered obsolete by some future change. |

TIP Workflow

How do new proposals get added to the protocol?

  1. All TIPs begin life as an Idea proposed in the public IOTA discussion forum, that is, the GitHub Discussions page of the TIP repository. A public, open discussion should predate any formal TIP submission. If you want to propel your proposal to acceptance, make sure to build consensus and support in the community around your proposed changes already at the idea stage.

  2. Once the idea has been vetted, your next task is to submit a Draft TIP to the TIP repository as a pull request. Do not assign a TIP number to the draft yet, but make sure that the proposal is technically sound and follows the format and style guides of the TIP Process. Create a sub-folder under the tips folder with the title of the draft (tips/title_of_draft/) and put all assets in this folder.

  3. A TIP editor reviews your PR and assigns a TIP number to the draft.

  4. The Technical Committee as well as the broader public evaluate the draft proposal and might ask for modifications or clarifications. The proposal can only be merged into the repository as a draft if it represents a net improvement and does not complicate the protocol unduly.

  5. The TIP is merged into the repo with Draft status by TIP editor/author.

  6. When a working implementation is presented and there are clear plans on how to progress the TIP to completion, the TIP author submits a subsequent PR that links its implementation to the TIP and progresses it to Proposed stage. The TIP is ready to be deployed on testnet.

  7. When a Proposed TIP is deemed to have met all appropriate criteria and its implementation has been demonstrated to work reliably in a testnet environment, it is ready to be moved to the main network. Upon deployment, the TIP status must change to Active.

How can a TIP transition from one status to another?

(TIP status transition diagram)

A Draft TIP might be moved to Deferred status by the TIP author(s) when they are no longer working on the proposal, but plan to continue it in the future.

A Draft TIP might be moved to Withdrawn status by the TIP author(s).

A Draft TIP might be moved to Rejected status by TIP editor(s) or Technical Committee if it does not meet the appropriate TIP criteria, or no relevant progress has been demonstrated on the TIP for at least 3 years.

A Draft TIP might be moved to Proposed status by TIP author(s) if it is considered complete, has a working implementation and clear plans on how to progress it to Active status.

A Proposed TIP might be moved to Active status once its TIP-specific adoption criteria have been met. For Core TIPs this means deployment on the main network.

A Proposed TIP might be moved to Rejected status by TIP editor(s) or Technical Committee if its implementation puts undue burden and complexity on the protocol, or if other significant problems are discovered during testing.

An Active TIP might be moved to Replaced status by a newer TIP. The replaced TIP must point to the TIP that replaces it.

An Active TIP might be moved to Obsolete status when the feature is deprecated.

How to champion the TIP Process as a TIP author?

  • Browse the idea discussion forum before posting a new TIP idea. Someone else might already have proposed your idea, or a similar one. Take inspiration from previous ideas and discussions.
  • It is your responsibility as a TIP author to build community consensus around your idea. Involve as many people in the discussion as you can. Use social media platforms, Discord or Reddit to raise awareness of your idea.
  • Submit a draft TIP as a PR to the TIP repository. Put extra care into following TIP guidelines and formats. TIPs must contain a link to previous discussions on the topic, otherwise your submissions might be rejected. TIPs that do not present convincing motivation, demonstrate lack of understanding of the design's impact, or are disingenuous about the drawbacks or alternatives tend to be poorly-received.
  • Your draft TIP gets a TIP number assigned by a TIP editor and receives review and feedback from the larger community as well as from the Technical Committee. Be prepared to revise your draft based on this input.
  • TIPs that have broad support are much more likely to make progress than those that don't receive any comments. Feel free to reach out to the TIP editors in particular to get help to identify stakeholders and obstacles.
  • Submitted draft TIPs rarely go through the process unchanged, especially as alternatives and drawbacks are shown. You can make edits, big and small, to the draft TIP to clarify or change the design, but make changes as new commits to the pull request, and leave a comment on the pull request explaining your changes. Specifically, do not squash or rebase commits after they are visible on the pull request.
  • When your draft TIP PR gets enough approvals from TIP editors and Technical Committee members, it can be merged into the repository; however, your job is far from complete! To move the draft into the next status (Proposed), you have to demonstrate a working implementation of your TIP. For Core TIPs, seek help from protocol developers and/or client teams to coordinate the feature implementation. For IRCs, for example, you need to provide the implementation yourself.
  • You also need to present a clear plan on how the TIP will be moved to the Active status, by for example agreeing on a TIP deployment strategy with the Technical Committee or core developers.
  • To move your Draft TIP to the Proposed phase, submit a subsequent PR that links its implementation and devises its route to become Active. The latter might be an additional document in the TIP's folder, a link to a public discussion or a short description or comment on the PR itself.
  • To move your Proposed TIP to Active status you need to demonstrate that it has met its specific adoption criteria. For Core TIPs, this means that the majority of network nodes support it. For other TIPs, especially for IRCs, adoption might mean that the standard is publicly available, well documented and there are applications building on it.

TIP Header Preamble

Each TIP must have an RFC 822 style header preamble, preceded and followed by three hyphens (---). The headers must appear in the following order. Headers marked with "*" are optional and are described below. All other headers are required.

| Field | Description |
| --- | --- |
| tip | TIP number, or "?" before being assigned (assigned by TIP editor) |
| title | Few words describing the TIP, maximum 44 characters |
| description* | One full short sentence |
| author | A comma separated list of the author's or authors' name + GitHub username (in parenthesis), or name and email (in angle brackets). Example: `FirstName LastName (@GitHubUsername)`, `FirstName LastName <foo@bar.com>`, `FirstName (@GitHubUsername) and GitHubUsername (@GitHubUsername)` |
| discussions-to* | The URL pointing to the official discussion thread |
| status | Current status of the TIP. One of: Draft, Proposed, Active, Deferred, Rejected, Withdrawn, Obsolete or Replaced |
| type | TIP type, one of: Standards Track, Process or Informational |
| layer* | Only for Standards Track, defines layer: Core, Networking, Interface or IRC |
| created | Date created on, in ISO 8601 (yyyy-mm-dd) format |
| requires* | Link dependent TIPs by number |
| replaces* | Older TIP being replaced by this TIP |
| superseded-by* | Newer TIP replacing this TIP |
| withdrawal-reason* | A sentence explaining why the TIP was withdrawn. (Only needed when status is Withdrawn) |
| rejection-reason* | A sentence explaining why the TIP was rejected. (Only needed when status is Rejected) |
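As an illustration, a complete preamble following these rules might look like the following (all values are hypothetical; the type and layer values follow the headers used by existing TIPs in this repository):

```yaml
---
tip: 42
title: Example Feature
description: A hypothetical TIP preamble shown for illustration only.
author: FirstName LastName (@GitHubUsername)
discussions-to: https://github.com/iotaledger/tips/discussions
status: Draft
type: Standards
layer: Core
created: 2023-10-19
requires: TIP-18
---
```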

Linking TIPs

References to other TIPs should follow the format TIP-N, where N is the TIP number you are referring to. Each TIP that is referenced in a TIP MUST be accompanied by a relative Markdown link the first time it is referenced, and MAY be accompanied by a link on subsequent references. The link MUST always be done via relative paths so that the links work in this GitHub repository or forks of this repository. For example, you would link to this TIP with [TIP-1](../TIP-0001/tip-0001.md).

Auxiliary Files

Images, diagrams and auxiliary files should be included in the subdirectory of the TIP. When linking to an image in the TIP, use relative links such as [TIP Process Diagram](../TIP-0001/process.png).

Transferring TIP Ownership

It occasionally becomes necessary to transfer ownership of TIPs to a new champion. In general, we'd like to retain the original author as a co-author of the transferred TIP, but that's really up to the original author. A good reason to transfer ownership is because the original author no longer has the time or interest in updating it or following through with the TIP process, or has fallen off the face of the 'net (i.e. is unreachable or isn't responding to email). A bad reason to transfer ownership is because you don't agree with the direction of the TIP. We try to build consensus around a TIP, but if that's not possible, you can always submit a competing TIP.

If you are interested in assuming ownership of a TIP, send a message asking to take over, addressed to both the original author and the TIP editors. If the original author doesn't respond to the email in a timely manner, the TIP editors will make a unilateral decision (it's not like such decisions can't be reversed :)).

TIP Editors

The current TIP editors are:

  • Kumar Anirudha (@anistark, kumar.anirudha@iota.org)
  • Levente Pap (@lzpap, levente.pap@iota.org)

TIP Editor Responsibilities

The TIP editors' essential role is to assist and guard the process of contributing to the IOTA ecosystem, and to provide help and directions to community members as well as to external contributors. If you have a question regarding the TIP process, reach out to them and they will point you in the right direction.

They ensure that only quality contributions are added as TIPs, provide support for TIP authors, and monitor that the TIP process is fair, objective and well documented.

For each new TIP that comes in, an editor does the following:

  • Read the TIP to check if it is ready: sound and complete. The ideas must make technical sense, even if they don't seem likely to reach Active status.
  • Check that the title accurately describes the content.
  • Check the TIP for language (spelling, grammar, sentence structure, etc.), markup (GitHub-flavored Markdown) and code style.

If the TIP isn't ready, the editor will send it back to the author for revision, with specific instructions.

Once the TIP is ready to be merged as a draft, the editor will:

  • Assign a TIP number that does not conflict with other TIP numbers. It might be the PR number, but might also be selected as the next unused TIP number in line.
  • Merge the corresponding pull request.
  • Send a message back to the TIP author with the next step.

The editors don't pass judgment on TIPs; they merely handle the administrative and editorial part.

Technical Committee

The Technical Committee consists of several core contributors of the IOTA ecosystem and core developers. Their job is to evaluate technical details of TIPs, judge their technical feasibility and safeguard the evolution of the protocol. Core improvement ideas must be carefully thought through and their benefits must outweigh their drawbacks.

In order for a draft TIP to be accepted into the repo, it must be signed off by the Technical Committee. It is also the committee that gives the green light for drafts to become proposed or active.

Rationale

The TIP process is intended to replace the formerly adopted RFC process to achieve:

  • Simpler workflow and less rigid process structure,
  • Broader platform for ideation and early phase improvement discussions,
  • A layered protocol specification approach that can describe not only core components, but also higher layer protocols and application-level conventions.

In order not to reinvent the wheel, the TIP Process draws heavily on the BIP and EIP processes.

Backwards Compatibility

  • The current iotaledger/protocol-rfcs repository will be renamed to iotaledger/tips.
  • Merged RFCs will receive a TIP number and header with Active status.
  • PRs in the repo will be mapped as Draft TIPs, either modifications to existing TIPs or new ones.
  • The GitHub Discussion page of the repository will be restructured to accommodate TIP idea discussions.

References

  • BIP-1 and BIP-2, Bitcoin Improvement Proposal Purpose and Guidelines
  • EIP-1, Ethereum Improvement Proposal Purpose and Guidelines
  • CIP-1, Cardano Improvement Proposal Process

Copyright and related rights waived via CC0.

tip: 2
title: White Flag Ordering
description: Mitigate conflict spamming by ignoring conflicts
author: Thibault Martinez (@thibault-martinez) 
discussions-to: https://github.com/iotaledger/tips/pull/5, https://github.com/iotaledger/tips/pull/30
status: Active
type: Standards
layer: Core
created: 2020-03-06

Summary

This RFC is part of a set of protocol changes, Chrysalis, aiming to improve the network before Coordicide is complete.

The feature presented in this RFC, White Flag, allows milestones to confirm conflicting messages by enforcing deterministic ordering of the Tangle and applying only the first message(s) that will not violate the ledger state.

The content of this RFC is based on Conflict white flag: Mitigate conflict spamming by ignoring conflicts.

Motivation

  • Eliminates the Conflict spamming attack;
  • As conflicts are ignored in the balance computation, they do not need to be considered during tip selection, allowing much simpler tip selection algorithms and leading to increased TPS;
  • By using this approach in combination with an appropriate TSA, during regular use no honest message will ever require re-attaching, leading to increased CTPS;
  • Does not come with added computational complexity, as it integrates nicely into already existing algorithms.

Detailed design

First, let us define what it means for a message A to be:

  • referenced (indirectly or directly) by message B: A is contained in the past cone of B;
  • confirmed: A is referenced by a milestone;
  • applied: A is confirmed and applied to the ledger state;
  • ignored: A is confirmed but not applied because it is semantically invalid;
  • conflicting: A would lead to an invalid ledger state if applied;

In case of conflicting messages with White Flag, a node applies only one message to the ledger state and ignores all the others. For this to work, all the nodes need to be sure they are all applying the same message; hence, the need for a deterministic ordering of the Tangle.

First, this RFC proposes a deterministic ordering of the Tangle, then it explains which message is selected in case of conflicts.

Note: The past-cone of a milestone can only contain syntactically valid messages. If an invalid message is encountered, operations must be stopped immediately.

Deterministically ordering the Tangle

When a new milestone is broadcast to the network, nodes need to order the set of messages it confirms.

A subset of the Tangle can be ordered by many of its properties (e.g. an alphanumeric sort of the message hashes); however, since a graph traversal has to be done anyway to compute the ledger state, that same traversal can be used to order the messages deterministically with no extra overhead.

This ordering is a topological ordering because it respects the dependencies between messages, ensuring that the parents of a message are applied before the message itself. Since there are multiple valid topological orders for the same graph, all nodes must apply messages in the exact same order to avoid conflicting ledger states.

For this reason, this RFC proposes an order that has to be rigorously followed by all node implementations. This order is the topological ordering generated by a post-order Depth-First Search (DFS) starting from a milestone message, going through its parents (in the order they appear in the message) and finally analysing the current message. Since only a subset of messages is considered, the stopping condition of this DFS is reaching messages that are already confirmed by another milestone.

Applying first message(s) that does not violate the ledger state

If a conflict is occurring in the set of messages confirmed by a milestone, nodes have to apply the first - with regards to the order previously proposed - of the conflicting messages to the ledger and ignore all the others.

Once a message is marked as ignored, this is final and cannot be changed by a later milestone.

Since the ledger state is maintained from one milestone to another, a message conflicting with a message already confirmed by a previous milestone would also be ignored.

Pseudo-code

The following algorithm describes the process of updating the ledger state which is usually triggered by the arrival of a new milestone confirming many new messages.

Pseudo-code means that implementation details such as types, parameters, ..., are not important but that the logic has to be followed with care when implementing a node to avoid differences in the ledger state.

update_ledger_state(ledger, milestone, solid_entry_points) {
    s = new Stack()
    visited = new Set()

    s.push(milestone)

    while (!s.is_empty()) {
        curr = s.peek()
        next = null

        // Look for the first eligible parent that was not already visited
        for parent in curr.parents {
          if (!solid_entry_points.contains(parent) && !parent.confirmed && !visited.contains(parent)) {
            next = parent
            break
          }
        }

        // All parents have been visited, apply and visit the current message
        if next == null {
          ledger.apply(curr)
          visited.add(curr)
          s.pop()
        }
        // Otherwise, go to the parent
        else {
          s.push(next)
        }
    }
}

Notes:

  • solid_entry_points is a set of hashes that are considered solid even though we do not have them or their past in a database. They often come from a snapshot file and allow a node to solidify without needing the full tangle history. The hash of the genesis message is also a solid entry point.
  • confirmation_index is the index of the milestone that confirmed the message.
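The traversal above can be exercised end-to-end with a small runnable sketch. The message type, its field names and the toy diamond graph are illustrative, not part of any node implementation; the loop itself mirrors the pseudo-code:

```go
package main

import "fmt"

// message is a toy stand-in for a Tangle vertex; the names here are
// illustrative, not taken from any node implementation.
type message struct {
	id        string
	parents   []*message
	confirmed bool
}

// confirmationOrder walks the past cone of a milestone with the iterative
// depth-first search from the pseudo-code above and returns the order in
// which messages would be applied to the ledger.
func confirmationOrder(milestone *message, solidEntryPoints map[string]bool) []string {
	var order []string
	visited := map[string]bool{}
	stack := []*message{milestone}

	for len(stack) > 0 {
		curr := stack[len(stack)-1]

		// Look for the first eligible parent that was not already visited.
		var next *message
		for _, p := range curr.parents {
			if !solidEntryPoints[p.id] && !p.confirmed && !visited[p.id] {
				next = p
				break
			}
		}

		if next == nil {
			// All parents handled: apply and visit the current message.
			if !visited[curr.id] {
				order = append(order, curr.id)
				visited[curr.id] = true
			}
			stack = stack[:len(stack)-1]
		} else {
			// Otherwise, descend to the parent first.
			stack = append(stack, next)
		}
	}
	return order
}

func main() {
	// Diamond: genesis <- a, genesis <- b, milestone ms approves {a, b}.
	genesis := &message{id: "genesis"}
	a := &message{id: "a", parents: []*message{genesis}}
	b := &message{id: "b", parents: []*message{genesis}}
	ms := &message{id: "ms", parents: []*message{a, b}}

	sep := map[string]bool{"genesis": true}
	fmt.Println(confirmationOrder(ms, sep)) // [a b ms]
}
```

Note that parents are inspected in the message's stated parent order, which is what makes the resulting order deterministic across nodes.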

Example

In this example, there are 26 messages labeled from A to Z. The set of red messages {A, B, C, E, F, H} is confirmed by milestone H. The set of purple messages {D, G, J, L, M, N, K, I, O, S, R, V} is confirmed by milestone V. The set of blue messages {Q, U, X, Y, Z, W, T, P} is confirmed by another milestone.

Applying the previously shown algorithm on the purple set produces the topological order {D, G, J, L, M, R, I, K, N, O, S, V}.

Here, message G and message O, both confirmed by milestone V, are conflicting. Since in the topological order just produced, G appears before O, G is applied to the ledger and O is ignored.

Drawbacks

  • The ledger state is now only well-defined at milestones, meaning that we have to wait until each milestone is issued in order to confirm a spend;
  • Everything that is seen is now part of the Tangle, including double-spend attempts, meaning that malicious data will now be saved as part of the consensus set of the Tangle;
  • To prove that a specific (non-milestone) message is valid, it is no longer sufficient to just provide the "path" to its confirming milestone, but instead all messages in its past cone.

Rationale and alternatives

The main alternative to White Flag is what has been done so far, i.e. not allowing the confirmation of conflicting messages. As explained in this RFC, this comes with added complexity when performing a Tip Selection Algorithm, because a node has to constantly check for ledger inconsistencies.

As part of Chrysalis and coupled with an adequate Tip Selection Algorithm, White Flag improves the network by allowing a potential increase of TPS/CTPS.

Unresolved questions

A node consumes and produces snapshot files and bases the computation of its ledger state on them. In the current network, if one of these files was tampered with and fed to a node, it would eventually lead to an invalid ledger state where a message confirmed by a milestone would actually be a double spend. This situation would be detected by the node and it would stop its activities as a security measure. However, with White Flag, such messages would be confirmed by milestones but ignored by the node, the fake snapshot then going unnoticed. The ledger state would then become more and more corrupted and the view of the balances completely wrong, errors just accumulating over time. The need for a snapshot verification mechanism is then amplified by the implementation of White Flag. This mechanism being out of the scope of this RFC, it will be described in another RFC.

Copyright

Copyright and related rights waived via CC0.

tip: 3
title: Uniform Random Tip Selection
description: Perform fast tip-selection to increase message throughput
author: Luca Moser (@luca-moser) 
discussions-to: https://github.com/iotaledger/tips/pull/8
status: Active
type: Standards
layer: Core
created: 2020-03-09

Summary

Weighted Uniform Random Tip Selection on a subset enables a node to perform fast tip-selection to increase message throughput. The algorithm selects tips which are non-lazy to maximize confirmation rate.

Motivation

Because of the white-flag confirmation algorithm, it is no longer necessary to perform complex tip-selection which evaluates ledger mutations while walking. Therefore, a simpler, better performing algorithm can be used to select tips, which in turn increases overall message throughput.

To maximize confirmation rate however, the algorithm needs to return tips which are non-lazy. Non-lazy in this context means that a tip does not attach to a cone of messages which is too far in the past. Such a cone is likely to be already confirmed and does not contribute to the rate of newly confirmed messages when a milestone is issued.

Detailed design

Definitions:

  • Direct Approvers - The set of messages which directly approve a given message.
  • Approvee - The directly approved message of a given message.
  • Solid Message - A message whose past cone is known to the node.
  • Valid Message - A message which is syntactically valid.
  • Tip - A valid solid message that doesn't have approvers. Its past cone contains only valid messages.
  • Score - An integer assigned to a tip. The tip selection algorithm uses it to determine how to select tips.
  • Confirmed Root Message - The set of first seen messages which are confirmed by a previous milestone when we walk the past cone of a given message. The walk stops on a confirmed message.
    Note that the red-marked milestone is also a Confirmed Root Message.
  • Message Snapshot Index (MSI) defines the index of the milestone which confirmed a given message.
  • Oldest Message Root Snapshot Index (OMRSI) defines the lowest milestone index of a set of Confirmed Root Messages of a given message.
  • Youngest Message Root Snapshot Index (YMRSI) defines the highest milestone index of a set of Confirmed Root Messages of a given message.
  • Latest Solid Milestone Index (LSMI) - The index of the latest solid milestone.
  • Below Max Depth (BMD) defines a threshold upon which it is decided whether a message is still relevant in relation to the recent parts of the Tangle. The current BMD for mainnet nodes is 15 milestones, which means that messages whose OMRSI in relation to the LSMI is more than 15 are "below max depth".

OMRSI / YMRSI example

Given the blue PoV message, its OMRSI is milestone 1 and its YMRSI is milestone 2. Note that, here again, the milestones are also Confirmed Root Messages.

Milestone based tip scoring

The milestone based scoring defines a tip's score by investigating the tip's relation to the cone it approves and previously issued milestones.

A tip can have one of 3 score states:

  • 0: The tip is lazy and should not be selected.
  • 1: The tip is somewhat lazy.
  • 2: The tip is a non-lazy tip.

Definitions:

  • C1: Max allowed delta value for the YMRSI of a given message in relation to the current LSMI.
  • C2: Max allowed delta value for the OMRSI of a given message in relation to the current LSMI.
  • M: Max allowed delta value for the OMRSI of a given message in relation to the current LSMI. M is the below max depth (BMD) parameter.

Recommended defaults:

  • C1 = 8 milestones
  • C2 = 13 milestones
  • M = 15 milestones

Scoring Algorithm (pseudo code):


enum Score (
    LAZY = 0
    SEMI_LAZY = 1
    NON_LAZY = 2
)

const (
    C1 = 8
    C2 = 13
    M = 15
)

func score(tip Tip) Score {
    
    // if the LSMI to YMRSI delta is over C1, then the tip is lazy
    if (LSMI - YMRSI(tip) > C1) {
        return Score.LAZY
    }
    
    // if the OMRSI to LSMI delta is over M/below-max-depth, then the tip is lazy
    if (LSMI - OMRSI(tip) > M) {
        return Score.LAZY
    }
    
    if (LSMI - OMRSI(tip) > C2) {
        return Score.SEMI_LAZY
    }

    return Score.NON_LAZY
}
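The scoring rules condense into a runnable sketch. As a simplification for illustration, the milestone indices are passed in explicitly rather than resolved from a tip; the constants are the recommended defaults from above:

```go
package main

import "fmt"

// Recommended defaults from the text.
const (
	c1 = 8  // max allowed LSMI - YMRSI delta
	c2 = 13 // max allowed LSMI - OMRSI delta before a tip is semi-lazy
	m  = 15 // below-max-depth threshold
)

// score reproduces the pseudo-code above, with the indices passed in
// directly instead of being computed from a tip's past cone.
func score(lsmi, ymrsi, omrsi int) int {
	if lsmi-ymrsi > c1 {
		return 0 // lazy
	}
	if lsmi-omrsi > m {
		return 0 // lazy (below max depth)
	}
	if lsmi-omrsi > c2 {
		return 1 // semi-lazy
	}
	return 2 // non-lazy
}

func main() {
	// With LSMI = 100: a tip attached to a recently confirmed cone is
	// non-lazy, one whose OMRSI delta exceeds C2 is semi-lazy, and one
	// whose YMRSI delta exceeds C1 is lazy.
	fmt.Println(score(100, 99, 95)) // 2
	fmt.Println(score(100, 98, 86)) // 1
	fmt.Println(score(100, 90, 88)) // 0
}
```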

Random Tip-Selection

A node should keep a set of non-lazy tips (score 2). Every time a node is asked to select tips to be approved, it will pick randomly from the set. A node must not execute tip-selection if it is not synchronized.

A tip should not be removed from the tips set immediately after it was selected in select(), to make it possible for it to be re-selected, which in turn makes the Tangle wider and improves synchronization speed. A tip is removed from the tips set once it has gathered X direct approvers or once a certain amount of time T has passed. It is recommended to use X = 2 and T = 3, but the thresholds should be configurable.

Purpose Of Semi-Lazy Tips

Semi-Lazy tips are not eligible for tip-selection, but the coordinator node may implement a tip selection algorithm that confirms semi-lazy tips. Semi-lazy tips will usually be left behind, but parties interested in having them confirmed are incentivized to run spammers that will actively reduce the amount of semi-lazy tips eligible for coordinator's tip selection. Given a coordinator that chooses semi-lazy tips, running such spammers may get those messages confirmed before they become lazy.

Drawbacks

Depending on when and how often YMRSI/OMRSI values are computed, this tip-selection could still have a slow runtime, as one would need to constantly walk down the Tangle to compute those values. However, smart caching might resolve this issue.

Rationale and alternatives

The previous tip-selection was written in accordance with the original IOTA whitepaper, as it also functioned as part of the consensus mechanism. However, it became apparent relatively soon that the cumulative weight computation was too heavy for an actual high throughput scenario and, as such, the CW calculation is currently not used within node implementations at all.

Because confirmations with the white-flag approach no longer require approved cones to contain only state mutations that are consistent with a previous ledger state, it makes sense to alter the tip-selection to provide a fast way to get tips to approve with one's own message. The only important thing is to disincentivize lazy behaviour in order to maximize confirmation rate.

Unresolved questions

When to compute the score and YMRSI/OMRSI of a transaction?

It is not yet clear when or how often the YMRSI/OMRSI values of a transaction should be updated. If the values are only computed once after a transaction became solid, the YMRSI/OMRSI might not reflect the true values, as subsequent milestones might confirm transactions within the same cone the given transaction approved.

Currently, we suggest recomputing the values every time a new milestone solidifies. Since different tips indirectly reference the same transactions, this computation can be optimized.

Copyright

Copyright and related rights waived via CC0.

tip: 4
title: Milestone Merkle Validation
description: Add Merkle tree hash to milestone for local ledger state verification
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/12, https://github.com/iotaledger/tips/pull/31
status: Active
type: Standards
layer: Core
created: 2020-05-04

Summary

In the IOTA protocol, nodes use the milestones issued by the Coordinator to reach a consensus on which transactions are confirmed. This RFC adds extra information to each milestone in the form of a Merkle tree hash, which allows nodes to explicitly validate their local view of the ledger state against the Coordinator's. This mechanism further enables a simple cryptographic proof of inclusion for transactions confirmed by the particular milestone.

Motivation

With the changes proposed in the IOTA protocol TIP-2, milestones are allowed to reference conflicting transactions. These conflicts are then resolved by traversing the newly confirmed transactions in a global, deterministic order and applying the corresponding ledger state changes in that order. Conflicts or invalid transactions are ignored, but stay in the Tangle. This approach has considerable advantages in terms of network security (e.g. protection against conflict spamming attacks) and network performance. However, a milestone no longer represents the inclusion state of all its referenced transactions, but only marks the order in which transactions are checked against the ledger state and then, if not violating, applied. This has two significant drawbacks:

  • Milestone validation: In the IOTA protocol, each node always compares the milestones issued by the Coordinator against its current ledger state. Discrepancies are reported and force an immediate halt of the node software. However, in the white flag proposal this detection is no longer possible as any milestone can lead to a valid ledger state by ignoring the corresponding violating ledger changes.
  • Proof of inclusion: In the pre-white-flag protocol, the inclusion of transaction t in the Tangle, and thus, the ledger, can be shown by providing an audit path of referencing transactions from t to its confirming milestone. In the white flag proposal this is no longer possible, as such an audit path does not provide any information on whether the transaction has been included or ignored.

Note that the white flag proposal only changes the behavior of conflicting transactions. Messages without a transaction payload can never conflict and are thus always included in the Tangle when they are first referenced by a milestone. As such, these messages do not need to be considered by this RFC and their processing and inclusion proof remain unchanged.

Where previously the structure of the Tangle alone was sufficient to address those issues, this RFC proposes to add the Merkle tree hash of all the valid (i.e. not ignored) newly confirmed transactions to the signed part of a milestone. This way, each IOTA node can check that the hash matches its local ledger state changes or provide a Merkle audit path for that milestone to prove the inclusion of a particular transaction.

Detailed design

Creating a Milestone

  • Perform tip selection to choose the parents referenced by the milestone.
  • Determine the topological order according to TIP-2 of the referenced messages that are not yet confirmed by a previous milestone.
  • Construct the list D consisting of the message IDs of all the not-ignored state-mutating transaction payloads in that particular order. A UTXO transaction is considered state-mutating if it creates a new output.
  • Compute the 32-byte Merkle tree hash H = MTH(D).
  • Prepare the milestone payload as described in TIP-8, where the field Inclusion Merkle Root is set to H.

Milestone validation

  • Verify the signature of the milestone m.
  • Construct the ordered list D of the message IDs of all the not-ignored state-mutating transaction payloads m confirms.
  • Compute H = MTH(D).
  • Verify that the field Inclusion Merkle Root in m matches H.

Proof of inclusion

  • Identify the confirming milestone m of the input transaction t.
  • Determine the ordered list of the not-ignored messages m confirms.
  • Compute the Merkle audit path of t with respect to the Merkle tree for this ordered list.
  • Provide the audit path as well as m as proof of inclusion for t.

Cryptographic components

Merkle hash trees

This RFC uses a binary Merkle hash tree for efficient auditing. In general, any cryptographic hashing algorithm can be used for this. However, we propose to use BLAKE2b-256, as it provides a faster and more secure alternative to the widely used SHA-256. In the following we define the Merkle tree hash (MTH) function that returns the hash of the root node of a Merkle tree:

  • The input is a list of binary data entries; these entries will be hashed to form the leaves of the tree.
  • The output is a single 32-byte hash.

Given an ordered list of n input strings Dn = {d1, d2, ..., dn}, the Merkle tree hash of D is defined as follows:

  • If D is an empty list, MTH(D) is the hash of an empty string:
    MTH({}) = BLAKE2().
  • If D has the length 1, the hash (also known as a leaf hash) is:
    MTH({d1}) = BLAKE2( 0x00 || d1 ).
  • Otherwise, for Dn with n > 1:
    • Let k be the largest power of two less than n, i.e. k < n ≤ 2k.
    • The Merkle tree hash can be defined recursively:
      MTH(Dn) = BLAKE2( 0x01 || MTH({d1, ..., dk}) || MTH({dk+1, ..., dn}) ).

Note that the hash calculations for leaves and nodes differ. This is required to provide second preimage resistance: Without such a prefix, for a given input D an attacker could replace two (or more) leaves with their corresponding aggregated node hash without changing the final value of MTH(D). This violates the fundamental assumption that, given MTH(D), it should be practically impossible to find a different input D' leading to the same value. Adding a simple prefix mitigates this issue, since now leaf and node hashes are computed differently and can no longer be interchanged.

Note that we do not require the length of the input to be a power of two. However, its shape is still uniquely determined by the number of leaves.

Merkle audit paths

A Merkle audit path for a leaf in a Merkle hash tree is the shortest list of additional nodes in a Merkle tree required to compute the Merkle tree hash for that tree. At each step towards the root, a node from the audit path is combined with a node computed so far. If the root computed from the audit path matches the Merkle tree hash, then the audit path is proof that the leaf exists in the tree.
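The verification step described above can be sketched as follows. As with the MTH sketch, SHA-256 stands in for the specified BLAKE2b-256, and the pathStep type and helper names are illustrative:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// leafHash and nodeHash apply the 0x00/0x01 domain separation from the MTH
// definition, with SHA-256 standing in for BLAKE2b-256 (not in Go's stdlib).
func leafHash(data []byte) [32]byte {
	return sha256.Sum256(append([]byte{0x00}, data...))
}

func nodeHash(l, r [32]byte) [32]byte {
	buf := append([]byte{0x01}, l[:]...)
	return sha256.Sum256(append(buf, r[:]...))
}

// pathStep is one entry of a Merkle audit path: the sibling hash and
// whether it sits to the left of the running hash.
type pathStep struct {
	left    bool
	sibling [32]byte
}

// verifyAuditPath recomputes the root from a leaf and its audit path and
// compares it against the expected Merkle tree hash.
func verifyAuditPath(leaf []byte, path []pathStep, root [32]byte) bool {
	h := leafHash(leaf)
	for _, s := range path {
		if s.left {
			h = nodeHash(s.sibling, h)
		} else {
			h = nodeHash(h, s.sibling)
		}
	}
	return h == root
}

func main() {
	// Two-leaf tree: root = node(leaf(a), leaf(b)).
	a, b := []byte("a"), []byte("b")
	root := nodeHash(leafHash(a), leafHash(b))

	// The audit path for leaf a is just its sibling leaf(b) on the right.
	fmt.Println(verifyAuditPath(a, []pathStep{{left: false, sibling: leafHash(b)}}, root)) // true
	fmt.Println(verifyAuditPath(b, []pathStep{{left: false, sibling: leafHash(a)}}, root)) // false
}
```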

Example

Merkle tree with 7 leaves:

  • input D:
    1. 52fdfc072182654f163f5f0f9a621d729566c74d10037c4d7bbb0407d1e2c649
    2. 81855ad8681d0d86d1e91e00167939cb6694d2c422acd208a0072939487f6999
    3. eb9d18a44784045d87f3c67cf22746e995af5a25367951baa2ff6cd471c483f1
    4. 5fb90badb37c5821b6d95526a41a9504680b4e7c8b763a1b1d49d4955c848621
    5. 6325253fec738dd7a9e28bf921119c160f0702448615bbda08313f6a8eb668d2
    6. 0bf5059875921e668a5bdf2c7fc4844592d2572bcd0668d2d6c52f5054e2d083
    7. 6bf84c7174cb7476364cc3dbd968b0f7172ed85794bb358b0c3b525da1786f9f
  • Merkle tree hash H = MTH(D) (32-byte): bf67ce7ba23e8c0951b5abaec4f5524360d2c26d971ff226d3359fa70cdb0beb
root: bf67ce7ba23e8c0951b5abaec4f5524360d2c26d971ff226d3359fa70cdb0beb
 ├─ node: 03bcbb3cf4314eab2f5ae68c767ff0a5fec4573c865728231f71d596fd867b56
 │  ├─ node: ae4505f4cfae93586e23958ca88d35d2f34d43def49786b6d0d4224b819f4cda
 │  │  │  ┌ msg id: 52fdfc072182654f163f5f0f9a621d729566c74d10037c4d7bbb0407d1e2c649
 │  │  ├──┴ leaf: 3d1399c64ff0ae6a074afa4cd2ce4eab8d5c499c1da6afdd1d84b7447cc00544
 │  │  │  ┌ msg id: 81855ad8681d0d86d1e91e00167939cb6694d2c422acd208a0072939487f6999
 │  │  └──┴ leaf: 83b0b255014e9a3656f0004a3f17943a20b715ef9c3e7cb85a6b2abac15e00d0
 │  └─ node: 54d51291aca22ce5b04cd3e6584fa3026ebe86ef86f0a6dfb47ab843801d4b38
 │     │  ┌ msg id: eb9d18a44784045d87f3c67cf22746e995af5a25367951baa2ff6cd471c483f1
 │     ├──┴ leaf: ad4bc0a34b27f37810f2ff3a8177ecc98402f8f59a06270f9d285fdf764e45fe
 │     │  ┌ msg id: 5fb90badb37c5821b6d95526a41a9504680b4e7c8b763a1b1d49d4955c848621
 │     └──┴ leaf: ffb3a7c6bea8f9fdcfb26f4701ad6e912a6076e1a40663607dbe110ebfc9a571
 └─ node: ce22d5bc728023e7ab6a9eb8f58baf62b9565fc8baeef4b377daa6709dbe598c
    ├─ node: e14c8af1258005cd0dbed88f0c5885c6988f319bb8f24272a7495592b873c169
    │  │  ┌ msg id: 6325253fec738dd7a9e28bf921119c160f0702448615bbda08313f6a8eb668d2
    │  ├──┴ leaf: 1c062628a7a147cc6a4defa655ce6c4ae5b838b4b4cd81b12e8924b5b4b5cca6
    │  │  ┌ msg id: 0bf5059875921e668a5bdf2c7fc4844592d2572bcd0668d2d6c52f5054e2d083
    │  └──┴ leaf: 2ef4e2ad06b8c8ae1fd4b28b5ed166829533fbfff1f6c14218358537da277fa3
    │  ┌ msg id: 6bf84c7174cb7476364cc3dbd968b0f7172ed85794bb358b0c3b525da1786f9f
    └──┴ leaf: 7ec774ebc33ed4ca298e8a1cf1f569e36c6784467d63b055efd7612abe2858a4

Drawbacks

  • The computation of the Merkle tree hash of Dn requires 2n-1 evaluations of the underlying hashing algorithm. This makes the milestone creation and validation computationally slightly more expensive.

Rationale and alternatives

It is a crucial security feature of the IOTA network that nodes are able to validate the issued milestones. As a result, if the Coordinator were to ever send an invalid milestone, such as one that references counterfeit transactions, the rest of the nodes would not accept it. In a pure implementation of TIP-2 this feature is lost and must be provided by external mechanisms. A Merkle tree hash provides an efficient, secure and well-established method to compress the information about the confirmed transactions in such a way, that they fit in the milestone transaction.

In this context, it could also be possible to use an unsecured checksum (such as CRCs) of the message IDs instead of a Merkle tree hash. However, the small benefit of faster computation times does not justify the potential security risks and attack vectors.

Reference implementation

Example Go implementation in wollac/iota-crypto-demo.

Copyright

Copyright and related rights waived via CC0.

tip: 5
title: Binary To Ternary Encoding
description: Define the conversion between binary and ternary data
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/15
status: Active
type: Standards
layer: Core
created: 2020-06-08

Summary

In the IOTA protocol, a transaction is represented as ternary data. However, sometimes it is necessary to store binary data (e.g. the digest of a binary hash function) inside of a transaction. This requires the conversion of binary into ternary strings. The IOTA client libraries support the opposite conversion that encodes 5 trits as 1 byte (sometimes also referred to as t5b1 encoding), which is used for network communication and in storage layers. This RFC describes the corresponding counterpart to encode 1 byte as 6 trits.

Motivation

A byte is composed of 8 bits that can represent 2^8 = 256 different values. On the other hand, 6 trits can hold 3^6 = 729 values while 5 trits can hold 3^5 = 243 values. Therefore, the most memory-efficient way to encode one byte requires the use of 6 trits. Although there exist many potential encoding schemes to convert binary data into ternary, the proposed version has been designed to directly match the widely used t5b1 encoding.

It is important to note that the b1t6 encoding presented in this RFC does not replace the current t5b1 encoding (or its corresponding decoding): t5b1 is for example used to store trytes in a binary database, while b1t6 will be used to attach binary data to an IOTA transaction.

Detailed design

Bytes to trits

In order to encode a binary string S into ternary, each byte of S is interpreted as a signed (two's complement) 8-bit integer value v. Then, v is encoded as a little-endian 6-trit string in balanced ternary representation. Finally, the resulting groups of trits are concatenated.

This algorithm can also be described using the following pseudocode:

T ← []
foreach byte b in S:
  v ← int8(b)
  g ← IntToTrits(v, 6)
  T ← T || g

Here, the function IntToTrits converts a signed integer value into its corresponding balanced ternary representation in little-endian order of the given length. The functionality of IntToTrits exactly matches the one used to e.g. encode the transaction values as trits in the current IOTA protocol.

Trits to bytes

Given a trit string T as the result of the previous encoding, T is converted back to its original byte string S by simply reversing the conversion:

S ← []
foreach 6-trit group g in T:
  v ← TritsToInt(g)
  b ← byte(v)
  S ← S || b

Examples

  • I
    • binary (hex): 00
    • ternary (trytes): 99
  • II
    • binary (hex): 0001027e7f8081fdfeff
    • ternary (trytes): 99A9B9RESEGVHVX9Y9Z9
  • III
    • binary (hex): 9ba06c78552776a596dfe360cc2b5bf644c0f9d343a10e2e71debecd30730d03
    • ternary (trytes): GWLW9DLDDCLAJDQXBWUZYZODBYPBJCQ9NCQYT9IYMBMWNASBEDTZOYCYUBGDM9C9
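The encoding can be sketched compactly in Go. The function names are illustrative; intToTrits mirrors the balanced-ternary conversion described above, and the tryte alphabet is the standard IOTA one (9, A..M for 0..13, N..Z for -13..-1). The sketch reproduces examples I and II:

```go
package main

import "fmt"

// tryteAlphabet maps a balanced tryte value v in [-13, 13] to its character
// via tryteAlphabet[(v+27)%27].
const tryteAlphabet = "9ABCDEFGHIJKLMNOPQRSTUVWXYZ"

// intToTrits returns the little-endian balanced-ternary representation of v
// using exactly n trits (each trit is -1, 0 or 1).
func intToTrits(v, n int) []int {
	trits := make([]int, n)
	for i := 0; i < n && v != 0; i++ {
		r := v % 3 // in Go the remainder takes the sign of v
		v /= 3
		switch r {
		case 2:
			r, v = -1, v+1
		case -2:
			r, v = 1, v-1
		}
		trits[i] = r
	}
	return trits
}

// encodeB1T6 interprets each byte as a signed 8-bit integer, encodes it as
// 6 trits and renders the result as 2 trytes per byte.
func encodeB1T6(data []byte) string {
	out := make([]byte, 0, 2*len(data))
	for _, b := range data {
		trits := intToTrits(int(int8(b)), 6)
		for i := 0; i < 6; i += 3 {
			v := trits[i] + 3*trits[i+1] + 9*trits[i+2]
			out = append(out, tryteAlphabet[(v+27)%27])
		}
	}
	return string(out)
}

func main() {
	fmt.Println(encodeB1T6([]byte{0x00})) // 99
	fmt.Println(encodeB1T6([]byte{0x00, 0x01, 0x02, 0x7e, 0x7f, 0x80, 0x81, 0xfd, 0xfe, 0xff}))
	// 99A9B9RESEGVHVX9Y9Z9
}
```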

Drawbacks

  • Conceptually, one byte can be encoded using log_3(256) ≈ 5.0474 trits. Thus, encoding 1 byte as 6 trits consumes considerably more memory than the mathematical minimum.
  • Depending on the actual implementation the conversion might be malleable: e.g. since byte(-1) = 0xff and byte(255) = 0xff, both Z9 (-1) and LI (255) could be decoded as ff. However, LI can never be the result of a valid b1t6 encoding. As such, the implementation must reject such invalid inputs.

Rationale and alternatives

There are several ways to convert binary data into ternary, e.g.

  • the conversion used as part of the Kerl hash function encoding chunks of 48 bytes as 242 trits,
  • or by encoding each bit as one trit with the corresponding value.

The current client libraries do not provide any functionality to convert an arbitrary amount of bytes into trits. The closest available functionality is the ASCII to trit conversion, which is used for human-readable messages in transactions:

T ← []
foreach char c in S:
  first ← uint8(c) mod 27
  second ← (uint8(c)-first) / 27
  T ← T || IntToTrits(first, 3) || IntToTrits(second, 3)

This function can be adapted to encode any general byte string. However, the conversion seems rather arbitrary and the algorithm is computationally more intense than the proposed solution. On the other hand, using the algorithm from this RFC also for the conversion of ASCII messages would break backward compatibility, which is also undesirable.

Each conversion method has different advantages and disadvantages. However, since the t5b1 encoding is well-defined and has been used in IRI for both network communications and storage layers for a long time, choosing the direct counterpart for the opposite conversion represents the most logical solution providing a nice balance between performance and memory-efficiency.

Reference implementation

Example Go implementation in wollac/iota-crypto-demo.

Copyright

Copyright and related rights waived via CC0.

tip: 6
title: Tangle Message
description: Generalization of the Tangle transaction concept
author: Gal Rogozinski (@GalRogozinski) 
discussions-to: https://github.com/iotaledger/tips/pull/17
status: Replaced
type: Standards
layer: Core
created: 2020-07-28
superseded-by: TIP-24

Summary

The Tangle is the graph data structure behind IOTA. In the current IOTA protocol, the vertices of the Tangle are represented by transactions. This document proposes an abstraction of this idea where the vertices are generalized messages, which then contain the transactions or other structures that are processed by the IOTA protocol. Just as before, each message directly approves other messages, which are known as parents.

The messages can contain payloads. These are core payloads that will be processed by all nodes as part of the IOTA protocol. Some payloads may have other nested payloads embedded inside. Hence, parsing is done layer by layer.

Motivation

To better understand this layered design, consider the Internet Protocol (IP), for example: There is an Ethernet frame that contains an IP payload. This in turn contains a TCP packet that encapsulates an HTTP payload. Each layer has a certain responsibility and once this responsibility is completed, we move on to the next layer.

The same is true with how messages are parsed. The outer layer of the message enables the mapping of the message to a vertex in the Tangle and allow us to perform some basic validation. The next layer may be a transaction that mutates the ledger state, and one layer further may provide some extra functionality on the transactions to be used by applications.

By making it possible to add and exchange payloads, an architecture is being created that can easily be extended to accommodate future needs.

Detailed design

Structure

Message ID

The Message ID is the BLAKE2b-256 hash of the entire serialized message.

Serialized layout

The following table describes the serialization of a Message following the notation from RFC-0041:

Name           | Type   | Description
Network ID     | uint64 | Network identifier. This field denotes whether the message was meant for mainnet, testnet, or a private net. It also marks what protocol rules apply to the message. Usually, it will be set to the first 8 bytes of the BLAKE2b-256 hash of the concatenation of the network type and the protocol version string.
Parents Count  | uint8  | The number of messages that are directly approved.
Parents        | anyOf Parent | References another directly approved message.
  Parent:
    Message ID | ByteArray[32] | The Message ID of the parent.
Payload Length | uint32 | The length of the following payload in bytes. A length of 0 means no payload will be attached.
Payload        | oneOf Generic Payload | An outline of a generic payload.
  Generic Payload:
    Payload Type | uint32 | The type of the payload. It will instruct the node how to parse the fields that follow.
    Data Fields  | ANY    | A sequence of fields, where the structure depends on Payload Type.
Nonce          | uint64 | The nonce which lets this message fulfill the PoW requirement.

Message validation

The following criteria define whether a message passes the syntactical validation:

  • The total message size must not exceed 32 KiB (32 * 1024 bytes).
  • Parents:
    • Parents Count must be at least 1 and not larger than 8.
    • Parents must be sorted in lexicographical order.
    • Each Message ID must be unique.
  • Payload (if present):
    • Payload Type must match one of the values described under Payloads.
    • Data fields must be correctly parsable in the context of the Payload Type.
    • The payload itself must pass syntactic validation.
  • Nonce must be a valid solution of the message PoW as described in TIP-12.
  • There must be no trailing bytes after all message fields have been parsed.
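The parent-related rules above can be sketched as a single check. The function name and error messages are illustrative, not taken from any node implementation:

```go
package main

import (
	"bytes"
	"fmt"
)

// validateParents checks the parent rules from the list above: Parents
// Count in [1, 8], lexicographic ordering and uniqueness of the 32-byte
// Message IDs. Since the list must be strictly increasing, one pass over
// adjacent pairs covers both the ordering and the uniqueness rule.
func validateParents(parents [][32]byte) error {
	if len(parents) < 1 || len(parents) > 8 {
		return fmt.Errorf("parents count out of range: %d", len(parents))
	}
	for i := 1; i < len(parents); i++ {
		switch bytes.Compare(parents[i-1][:], parents[i][:]) {
		case 0:
			return fmt.Errorf("duplicate parent at index %d", i)
		case 1:
			return fmt.Errorf("parents not in lexicographical order at index %d", i)
		}
	}
	return nil
}

func main() {
	a, b := [32]byte{0x01}, [32]byte{0x02}
	fmt.Println(validateParents([][32]byte{a, b})) // <nil>
	fmt.Println(validateParents([][32]byte{b, a})) // parents not in lexicographical order at index 1
}
```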

Payloads

While messages without a payload, i.e. Payload Length set to zero, are valid, such messages do not contain any information. As such, messages usually contain a payload. The detailed specification of each payload type is out of scope of this RFC. The following table lists all currently specified payloads that can be part of a message and links to their specification. The indexation payload will be specified here as an example:

Payload Name | Type Value | TIP
Transaction  | 0          | TIP-7
Milestone    | 1          | TIP-8
Indexation   | 2          | TIP-6

Indexation payload

This payload allows the addition of an index to the encapsulating message, as well as some arbitrary data. Nodes will expose an API that allows to query messages by index.

The structure of the indexation payload is as follows:

Name         | Type                    | Description
Payload Type | uint32                  | Set to value 2 to denote an Indexation Payload.
Index Length | uint16                  | The length of the following index field in bytes.
Index        | ByteArray[Index Length] | The index key of the message.
Data         | ByteArray               | Binary data.

Note that the Index field must be at least 1 byte and not longer than 64 bytes for the payload to be valid. The Data may have a length of 0.

Example

Below is the full serialization of a valid message with an indexation payload. The index is the "IOTA" ASCII string and the data is the "hello world" ASCII string. Bytes are expressed as hexadecimal numbers.

  • Network ID (8-byte): 0000000000000000 (0)
  • Parents Count (1-byte): 02 (2)
  • Parents (64-byte):
    • 210fc7bb818639ac48a4c6afa2f1581a8b9525e20fda68927f2b2ff836f73578
    • db0fa54c29f7fd928d92ca43f193dee47f591549f597a811c8fa67ab031ebd9c
  • Payload Length (4-byte): 19000000 (25)
  • Payload (25-byte):
    • Payload Type (4-byte): 02000000 (2)
    • Index Length (2-byte): 0400 (4)
    • Index (4-byte): 494f5441 ("IOTA")
    • Data (15-byte):
      • Length (4-byte): 0b000000 (11)
      • Data (11-byte): 68656c6c6f20776f726c64 ("hello world")
  • Nonce (8-byte): ce6d000000000000 (28110)

Rationale and alternatives

Instead of creating a layered approach, we could have simply created a flat transaction message that is tailored to mutate the ledger state, and try to fit all the use cases there. For example, with the indexed data use case, we could have filled some section of the transaction with that particular data. Then, this transaction would not correspond to a ledger mutation but instead only carry data.

This approach seems less extensible. It might have made sense if we had wanted to build a protocol that is just for ledger mutating transactions, but we want to be able to extend the protocol to do more than that.

Copyright

Copyright and related rights waived via CC0.

tip: 7
title: Transaction Payload
description: UTXO-based transaction structure
author: Luca Moser (@luca-moser) 
discussions-to: https://github.com/iotaledger/tips/pull/18
status: Replaced
type: Standards
layer: Core
created: 2020-07-10
superseded-by: TIP-20

Summary

In the current IOTA protocol, transactions are grouped into so-called bundles to assure that they can only be confirmed as one unit. This TIP proposes a new UTXO-based transaction structure containing all the inputs and outputs of a transfer. Specifically, this TIP defines a transaction payload for the messages described in the IOTA protocol TIP-6.

Motivation

Currently, the vertices of the Tangle are represented by transactions, where each transaction defines either an input or output. A grouping of those input/output transaction vertices makes up a bundle which transfers the given values as an atomic unit (the entire bundle is applied or none of it). An applied bundle consumes the input transactions' funds and creates the corresponding deposits into the output transactions' target addresses. Furthermore, additional meta transactions can be part of the bundle to carry parts of the signature which do not fit into a single input transaction.

The bundle concept has proven to be very challenging in practice because of the following issues:

  • Since the data making up the bundle is split across multiple vertices, it complicates the validation of the entire transfer. Instead of being able to immediately tell whether a bundle is valid or not, a node implementation must first collect all parts of the bundle before any actual validation can happen. This increases the complexity of the node implementation.
  • Reattaching the tail transaction of a bundle causes the entire transfer to be reapplied.
  • Due to the split across multiple transaction vertices and having to do PoW for each of them, a bundle might already be lazy in terms of where it attaches, reducing its chances to be confirmed.

To fix the problems mentioned above and to create a more flexible transaction structure, the goal is to achieve a self-contained transaction structure defining the data of the entire transfer as a payload to be embedded into a message.

The new transaction structure should fulfil the following criteria:

  • Support for Ed25519 (and thus reusable addresses).
  • Support for adding new types of signature schemes, addresses, inputs, and outputs as part of protocol upgrades.
  • Self-contained, as in being able to validate the transaction immediately after receiving it.
  • Enable unspent transaction outputs (UTXO) as inputs instead of an account based model.

Detailed design

UTXO

The unspent transaction output (UTXO) model defines a ledger state where balances are not directly associated to addresses but to the outputs of transactions. In this model, transactions reference outputs of previous transactions as inputs, which are consumed (removed) to create new outputs. A transaction must consume all the funds of the referenced inputs.
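
The consume-and-create mechanics described above can be sketched with a toy in-memory ledger (hypothetical names, not a node implementation):

```python
# Toy UTXO ledger: maps (transaction_id, output_index) -> amount.
# Illustrative sketch only; a real node tracks far more state.

def apply_transaction(ledger, tx_id, inputs, outputs):
    """Consume the referenced UTXOs and book the new outputs.

    `inputs` is a list of (transaction_id, output_index) pairs and
    `outputs` a list of amounts. Raises if an input is unknown/spent
    or if the transaction does not consume the entire input balance.
    """
    consumed = 0
    for ref in inputs:
        if ref not in ledger:
            raise ValueError(f"unknown or already spent UTXO: {ref}")
        consumed += ledger[ref]
    if consumed != sum(outputs):
        raise ValueError("inputs and outputs must balance exactly")
    for ref in inputs:                            # remove consumed outputs ...
        del ledger[ref]
    for index, amount in enumerate(outputs):      # ... and create new ones
        ledger[(tx_id, index)] = amount

ledger = {("genesis", 0): 100}
apply_transaction(ledger, "tx1", [("genesis", 0)], [60, 40])
# ledger now holds ("tx1", 0) -> 60 and ("tx1", 1) -> 40
```

Note how a conflicting transaction referencing `("genesis", 0)` again would fail immediately, which is the double-spend detection benefit listed above.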

Using a UTXO based model provides several benefits:

  • Parallel validation of transactions.
  • Easier double-spend detection, since conflicting transactions would reference the same UTXO.
  • Replay protection, which is important when having reusable addresses. Replaying the same transaction would be detected as already applied, since its inputs are already spent, and thus have no impact.
  • Technically, balances are no longer associated with addresses, which raises the level of abstraction and enables other types of outputs with particular unlock criteria.

Within a transaction using UTXOs, inputs and outputs make up the to-be-signed data of the transaction. The section unlocking the inputs is called the unlock block. An unlock block may contain a signature proving ownership of a given input's address and/or other unlock criteria.

The following image depicts the flow of funds using UTXO:

UTXO flow

Structure

Serialized layout

A Transaction Payload is made up of two parts:

  1. The Transaction Essence part which contains the inputs, outputs and an optional embedded payload.
  2. The Unlock Blocks which unlock the inputs of the Transaction Essence. When an unlock block contains a signature, it signs the entire Transaction Essence part.

All values are serialized in little-endian encoding. The serialized form of the transaction is deterministic, meaning the same logical transaction always results in the same serialized byte sequence.

The Transaction ID is the BLAKE2b-256 hash of the entire serialized payload data including signatures.
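
The hash can be computed with any BLAKE2b implementation; for example, in Python (a sketch, with placeholder payload bytes):

```python
import hashlib

def transaction_id(payload_bytes: bytes) -> bytes:
    """BLAKE2b-256 hash of the complete serialized Transaction Payload,
    including the unlock blocks and their signatures."""
    return hashlib.blake2b(payload_bytes, digest_size=32).digest()

# Placeholder bytes stand in for a real serialized payload:
tx_id = transaction_id(b"\x00\x00\x00\x00" + bytes(100))
assert len(tx_id) == 32
```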

The following table describes the entirety of a Transaction Payload in its serialized form following the notation from TIP-21.

Name Type Description
Payload Type uint32 Set to value 0 to denote a Transaction Payload.
Essence oneOf
Transaction Essence
Describes the essence data making up a transaction by defining its inputs, outputs and an optional payload.
Name Type Description
Transaction Type uint8 Set to value 0 to denote a Transaction Essence.
Inputs Count uint16 The number of input entries.
Inputs anyOf
UTXO Input
Describes an input which references an unspent transaction output to consume.
Name Type Description
Input Type uint8 Set to value 0 to denote an UTXO Input.
Transaction ID ByteArray[32] The BLAKE2b-256 hash of the transaction payload containing the referenced output.
Transaction Output Index uint16 The output index of the referenced output.
Outputs Count uint16 The number of output entries.
Outputs anyOf
SigLockedSingleOutput
Describes a deposit to a single address which is unlocked via a signature.
Name Type Description
Output Type uint8 Set to value 0 to denote a SigLockedSingleOutput.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
Address ByteArray[32] The raw bytes of the Ed25519 address which is the BLAKE2b-256 hash of the public key.
Amount uint64 The amount of tokens to deposit.
SigLockedDustAllowanceOutput
Describes a deposit which, as a special property, also alters the dust allowance of the target address.
Name Type Description
Output Type uint8 Set to value 1 to denote a SigLockedDustAllowanceOutput.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
Address ByteArray[32] The raw bytes of the Ed25519 address which is the BLAKE2b-256 hash of the public key.
Amount uint64 The amount of tokens to deposit.
Payload Length uint32 The length in bytes of the optional payload.
Payload optOneOf
Generic Payload
An outline of a generic payload.
Name Type Description
Payload Type uint32 The type of the payload. It will instruct the node how to parse the fields that follow.
Data Fields ANY A sequence of fields, where the structure depends on Payload Type.
Unlock Blocks Count uint16 The number of unlock block entries. It must match the field Inputs Count.
Unlock Blocks anyOf
Signature Unlock Block
Defines an unlock block containing a signature.
Name Type Description
Unlock Type uint8 Set to value 0 to denote a Signature Unlock Block.
Signature oneOf
Ed25519 Signature
Name Type Description
Signature Type uint8 Set to value 0 to denote an Ed25519 Signature.
Public key ByteArray[32] The Ed25519 public key of the signature.
Signature ByteArray[64] The Ed25519 signature signing the Blake2b-256 hash of the serialized Transaction Essence.
Reference Unlock Block
References a previous unlock block, where the same unlock block can be used for multiple inputs.
Name Type Description
Unlock Type uint8 Set to value 1 to denote a Reference Unlock Block.
Reference uint16 Represents the index of a previous unlock block.
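
The layout above can be illustrated by serializing a minimal transaction: one UTXO Input, one SigLockedSingleOutput, no inner payload, and one Signature Unlock Block. A sketch in Python, with zero-filled placeholders for the hashes, keys and signatures:

```python
import struct

def serialize_minimal_transaction() -> bytes:
    """Serialize a minimal Transaction Payload following the table above.
    All integers are little-endian ("<" in struct); the 32/64-byte
    fields are zero-filled placeholders, not real values."""
    essence = b"".join([
        struct.pack("<B", 0),            # Transaction Type
        struct.pack("<H", 1),            # Inputs Count
        struct.pack("<B", 0),            # Input Type: UTXO Input
        bytes(32),                       # Transaction ID (placeholder)
        struct.pack("<H", 0),            # Transaction Output Index
        struct.pack("<H", 1),            # Outputs Count
        struct.pack("<B", 0),            # Output Type: SigLockedSingleOutput
        struct.pack("<B", 0),            # Address Type: Ed25519
        bytes(32),                       # Ed25519 address (placeholder)
        struct.pack("<Q", 1_000_000),    # Amount
        struct.pack("<I", 0),            # Payload Length (no payload)
    ])
    unlock_blocks = b"".join([
        struct.pack("<H", 1),            # Unlock Blocks Count
        struct.pack("<B", 0),            # Unlock Type: Signature
        struct.pack("<B", 0),            # Signature Type: Ed25519
        bytes(32),                       # public key (placeholder)
        bytes(64),                       # signature (placeholder)
    ])
    payload_type = struct.pack("<I", 0)  # Payload Type: Transaction
    return payload_type + essence + unlock_blocks
```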

Transaction parts

In general, all parts of a Transaction Payload begin with a byte describing the type of the given part. This improves the flexibility to introduce new types/versions of the given part in the future.

Transaction Essence data

The Transaction Essence of a Transaction Payload carries the inputs, outputs, and an optional payload. The Transaction Essence is an explicit type and therefore starts with its own Transaction Essence Type byte which is of value 0.

Inputs

The Inputs part holds the inputs to consume in order to fund the outputs of the Transaction Payload. Currently, there is only one type of input, the UTXO Input. In the future, more types of inputs may be specified as part of protocol upgrades.

Each input must be accompanied by a corresponding Unlock Block at the same index in the Unlock Blocks part of the Transaction Payload.

UTXO Input

A UTXO Input is an input which references an unspent output of a previous transaction. This UTXO is uniquely identified by the Transaction ID of that transaction together with the corresponding output index. Each UTXO Input must be accompanied by an Unlock Block that is allowed to unlock the output the UTXO Input is referencing.

Example: If the input references an output to an Ed25519 address, then the corresponding unlock block must be of type Signature Unlock Block holding an Ed25519 signature.

Outputs

The Outputs part holds the outputs that are created by this Transaction Payload. The following output types are supported:

SigLockedSingleOutput

The SigLockedSingleOutput defines an output (with a certain amount) to a single target address which is unlocked via a signature proving ownership over the given address. This output supports addresses of different types.

SigLockedDustAllowanceOutput

The SigLockedDustAllowanceOutput works in the same way as a SigLockedSingleOutput but additionally controls the dust allowance on the target address. See TIP-14 for further information.

Payload

The Transaction Essence itself can contain another payload as described in general in TIP-6. The semantic validity of the encapsulating Transaction Payload does not have any impact on the payload.

The following table lists all the payload types that can be nested inside a Transaction Essence as well as links to the corresponding specification:

Name Type Value TIP
Indexation 2 TIP-6

Unlock Blocks

The Unlock Blocks part holds the unlock blocks unlocking inputs within a Transaction Essence. The following types of unlock blocks are supported:

Signature Unlock Block

A Signature Unlock Block defines an Unlock Block which holds a signature signing the BLAKE2b-256 hash of the Transaction Essence (including the optional payload).

Reference Unlock block

A Reference Unlock Block defines an Unlock Block which references a previous Unlock Block (which must not be another Reference Unlock Block). It must be used if multiple inputs can be unlocked via the same Unlock Block.

Example: Consider a Transaction Essence containing the UTXO Inputs 0, 1 and 2, where 0 and 2 are both spending outputs belonging to the same Ed25519 address A and 1 is spending from a different address B. This results in the following structure of the Unlock Blocks part:

Index Unlock Block
0 A Signature Unlock Block holding the Ed25519 signature for address A.
1 A Signature Unlock Block holding the Ed25519 signature for address B.
2 A Reference Unlock Block which references 0, as both require the same signature for A.
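
The assignment rule of the example can be sketched as follows (hypothetical helper; unlock blocks are represented as plain tuples):

```python
def build_unlock_blocks(input_addresses):
    """Assign one Signature Unlock Block per distinct address and a
    Reference Unlock Block for every repeated address, as in the
    example above. References always point backwards to the first
    signature for that address."""
    blocks, first_index = [], {}
    for index, address in enumerate(input_addresses):
        if address in first_index:
            blocks.append(("reference", first_index[address]))
        else:
            first_index[address] = index
            blocks.append(("signature", address))
    return blocks

# Inputs 0 and 2 spend from address A, input 1 from address B:
blocks = build_unlock_blocks(["A", "B", "A"])
# -> [("signature", "A"), ("signature", "B"), ("reference", 0)]
```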

Validation

A Transaction Payload has different validation stages, since some validation steps can only be executed when certain information has (or has not) been received. We therefore distinguish between syntactic and semantic validation:

Syntactic validation

Syntactic validation is checked as soon as the transaction data has been received in its entirety. It validates the structure but not the signatures of the transaction. If the transaction does not pass this stage, it must not be broadcasted further and can be discarded right away.

The following criteria define whether a payload passes the syntactic validation:

  • Essence:
    • Transaction Type value must denote a Transaction Essence.
    • Inputs:
      • Inputs Count must be 0 < x ≤ 127.
      • For each input the following must be true:
        • Input Type must denote a UTXO Input.
        • Transaction Output Index must be 0 ≤ x < 127.
      • Inputs must be sorted in lexicographical order of their serialized form.1
      • Each pair of Transaction ID and Transaction Output Index must be unique in the inputs set.
    • Outputs:
      • Outputs Count must be 0 < x ≤ 127.
      • For each output the following must be true:
        • Output Type must denote a SigLockedSingleOutput or a SigLockedDustAllowanceOutput.
        • Address Type must denote an Ed25519 Address.
        • Amount must be larger than zero.
      • Outputs must be sorted in lexicographical order of their serialized form.1
      • Each Address must be unique per output type. For example, a SigLockedSingleOutput and a SigLockedDustAllowanceOutput can have the same address, but not two SigLockedSingleOutputs.
      • The sum of all Amount fields must not exceed the total IOTA supply of 2,779,530,283,277,761.
    • Payload (if present):
      • Payload Type must match one of the values described under Payload.
      • Data fields must be correctly parsable in the context of the Payload Type.
      • The payload itself must pass syntactic validation.
  • Unlock Blocks:
    • Unlock Blocks Count must match Inputs Count of the Transaction Essence.
    • Each Unlock Type must denote a Signature Unlock Block or a Reference Unlock Block.
    • Each Signature Unlock Block must contain an Ed25519 Signature.
    • Each Signature Unlock Block must be unique.
    • A Reference Unlock Block at index i must have Reference < i and the unlock block at index Reference must be a Signature Unlock Block.
  • Given the type and length information, the Transaction Payload must consume the entire byte array of the Payload field of the encapsulating object.

1 ensures that serialization of the transaction becomes deterministic, meaning that libraries always produce the same bytes given the logical transaction.
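
A sketch of this canonical-ordering check on serialized UTXO Inputs (placeholder transaction IDs; using strict ordering also rules out duplicate inputs):

```python
import struct

def serialize_utxo_input(transaction_id: bytes, output_index: int) -> bytes:
    """Input Type byte, 32-byte Transaction ID, uint16 output index (LE)."""
    return struct.pack("<B", 0) + transaction_id + struct.pack("<H", output_index)

def inputs_are_canonical(serialized_inputs) -> bool:
    """True iff the serialized inputs are unique and in strictly
    ascending lexicographic order."""
    return all(a < b for a, b in zip(serialized_inputs, serialized_inputs[1:]))

a = serialize_utxo_input(bytes(32), 0)
b = serialize_utxo_input(bytes(32), 1)
assert inputs_are_canonical([a, b])        # sorted and unique
assert not inputs_are_canonical([b, a])    # out of order
assert not inputs_are_canonical([a, a])    # duplicate pair
```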

Semantic validation

The semantic validation of a Transaction Payload is performed when its encapsulating message is confirmed by a milestone. The semantic validity of transactions depends on the order in which they are processed. Thus, it is necessary that all the nodes in the network perform the checks in the same order, no matter the order in which the transactions are received. This is assured by using the White-Flag ordering as described in TIP-2.

Processing transactions according to the White-Flag ordering enables users to spend UTXOs which are created in the same milestone confirmation cone, as long as the spending transaction comes after the funding transaction in the aforementioned White-Flag order. In this case, it is recommended that users include the Message ID of the funding transaction as a parent of the message containing the spending transaction.

The following criteria define whether a payload passes the semantic validation:

  • Each input must reference a valid UTXO, i.e. the output referenced by the input's Transaction ID and Transaction Output Index is known (booked) and unspent.
  • The transaction must spend the entire balance, i.e. the sum of the Amount fields of all the UTXOs referenced by inputs must match the sum of the Amount fields of all outputs.
  • Each unlock block must be valid with respect to the UTXO referenced by the input of the same index:
    • If it is a Signature Unlock Block:
      • The Signature Type must match the Address Type of the UTXO,
      • the BLAKE2b-256 hash of Public Key must match the Address of the UTXO and
      • the Signature field must contain a valid signature for Public Key.
    • If it is a Reference Unlock Block, the referenced Signature Unlock Block must be valid with respect to the UTXO.
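
Two of these checks, the balance rule and the address-to-public-key binding, can be sketched in Python; actual Ed25519 signature verification needs a dedicated library and is omitted here:

```python
import hashlib

def ed25519_address(public_key: bytes) -> bytes:
    """An Ed25519 address is the BLAKE2b-256 hash of the public key,
    so the unlock block's key can be checked against the UTXO's address."""
    return hashlib.blake2b(public_key, digest_size=32).digest()

def semantically_balanced(input_amounts, output_amounts) -> bool:
    """The transaction must spend the entire input balance exactly."""
    return sum(input_amounts) == sum(output_amounts)

public_key = bytes(32)                       # placeholder key material
assert len(ed25519_address(public_key)) == 32
assert semantically_balanced([60, 40], [100])
assert not semantically_balanced([60, 40], [99])
```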

If a Transaction Payload passes the semantic validation, its referenced UTXOs must be marked as spent and its new outputs must be created/booked in the ledger. The Message ID of the message encapsulating the processed payload then also becomes part of the input for the White-Flag Merkle tree hash of the confirming milestone (TIP-4).

Transactions that do not pass semantic validation are ignored. Their UTXOs are not marked as spent and their outputs are not booked in the ledger.

Miscellaneous

Transaction timestamps

Since transaction timestamps – whether they are signed or not – do not provide any guarantee of correctness, they have been left out of the Transaction Payload. Applications relying on some notion of time for transactions can use the local solidification time or the global timestamp of the confirming milestone (TIP-6).

Address reuse

While, in contrast to Winternitz one-time signatures (W-OTS), producing multiple Ed25519 signatures for the same private key and address does not decrease its security, it still drastically reduces the privacy of users. It is thus considered best practice that applications and services create a new address per deposit to circumvent these privacy issues.

In essence, Ed25519 support allows for smaller transaction sizes and to safely spend funds which were sent to an already used deposit address. Ed25519 addresses are not meant to be used like email addresses. See this Bitcoin wiki article for further information.

Drawbacks

  • The new transaction format is the core data type within the IOTA ecosystem. Changing it means that all projects need to accommodate it, including wallets, web services, client libraries and applications using IOTA in general. It is not possible to keep these changes backwards compatible, meaning that all nodes must upgrade to further participate in the network.
  • Additionally, local snapshots can no longer be represented by a list of addresses and their balances, since the ledger is now made up of the UTXOs on which the actual funds reside. Therefore, local snapshot file schemes have to be adjusted to incorporate the transaction hashes, output indices, and then the destination addresses including the balances.

Rationale and alternatives

  • Introducing this new transaction structure allows for extensions in the future, to accommodate new requirements. With the support for Ed25519 addresses/signatures, transaction size is drastically reduced and allows for safe re-signing in case of address reuse. Due to the switch to a complete binary transaction, the transaction size is reduced even further, saving network bandwidth and processing time.
  • Other transaction structures have been considered but they would have misused existing transaction fields to accommodate for new features, instead of putting them into a proper descriptive structure. Additionally, those ideas would not have been safe against replay attacks, which deems reusing the old transaction structure, for example for Ed25519 addresses/signatures, as infeasible.
  • Not switching to the new transaction structure described in this RFC would have led to more people losing funds because of W-OTS address reuse and it would prevent extending the IOTA protocol further down the line.

Copyright

Copyright and related rights waived via CC0.

tip: 8
title: Milestone Payload
description: Coordinator issued milestone structure with Ed25519 authentication
author: Angelo Capossele (@capossele) , Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/19
status: Replaced
type: Standards
layer: Core
created: 2020-07-28
superseded-by: TIP-29

Summary

In IOTA, nodes use the milestones issued by the Coordinator to reach a consensus on which transactions are confirmed. This RFC proposes a milestone payload for the messages described in the IOTA protocol TIP-6. It uses Edwards-curve Digital Signature Algorithm (EdDSA) to authenticate the milestones.

Motivation

In the current IOTA protocol, milestones are authenticated using a ternary Merkle signature scheme. In the Chrysalis update, ternary transactions are replaced with binary messages containing different payload types. In order to address these new requirements, this RFC proposes the use of a dedicated payload type for milestones. It contains the same essential data fields that were previously included in the milestone bundle. Additionally, this document also describes how Ed25519 signatures are used to assure authenticity of the issued milestones. In order to make the management and security of the used private keys easier, simple multisignature features with support for key rotation have been added.

Detailed design

The BLAKE2b-256 hash of the Milestone Essence, consisting of the actual milestone information (like its index number or position in the tangle), is signed using the Ed25519 signature scheme as described in the IRTF RFC 8032. It uses keys of 32 bytes, while the generated signatures are 64 bytes.

To increase the security of the design, a milestone can (optionally) be independently signed by multiple keys at once. These keys should be operated by detached signature provider services running on independent infrastructure elements. This assists in mitigating the risk of an attacker gaining access to all the key material necessary for forging milestones. While the Coordinator takes responsibility for forming Milestone Payload Messages, it delegates signing to these providers through an ad-hoc RPC connector. Mutual authentication should be enforced between the Coordinator and the signature providers: a client-authenticated TLS handshake scheme is advisable. To increase the flexibility of the mechanism, nodes can be configured to require a quorum of valid signatures to consider a milestone as genuine.

In addition, a key rotation policy can also be enforced by limiting key validity to certain milestone intervals. Accordingly, nodes need to know which public keys are applicable for which milestone index. This can be provided by configuring a list of entries consisting of the following fields:

  • Index Range providing the interval of milestone indices for which this entry is valid. The interval must not overlap with any other entry.
  • Applicable Public Keys defining the set of valid public keys.
  • Signature Threshold specifying the minimum number of valid signatures. Must be at least one and not greater than the number of Applicable Public Keys.
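
A sketch of such a node-side configuration lookup (field names and key labels are illustrative, not a normative format):

```python
# Hypothetical key-rotation configuration: non-overlapping milestone
# index ranges, each with its applicable public keys and threshold.
KEY_RANGES = [
    # (first_index, last_index, applicable_public_keys, signature_threshold)
    (1, 1000, {"key-a", "key-b", "key-c"}, 2),
    (1001, 2000, {"key-b", "key-d"}, 1),
]

def applicable_keys(milestone_index):
    """Return (public_keys, threshold) valid for the given milestone index."""
    for first, last, keys, threshold in KEY_RANGES:
        if first <= milestone_index <= last:
            return keys, threshold
    raise LookupError(f"no key range configured for index {milestone_index}")

keys, threshold = applicable_keys(1500)
# keys == {"key-b", "key-d"}, threshold == 1
```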

Structure

The following table describes the entirety of a Milestone Payload in its serialized form following the notation from TIP-21:

Name Type Description
Payload Type uint32 Set to value 1 to denote a Milestone Payload.
Essence oneOf
Milestone Essence
Describes the signed part of a Milestone Payload.
Name Type Description
Index Number uint32 The index number of the milestone.
Timestamp uint64 The Unix time (seconds since Unix epoch) at which the milestone was issued.
Parents Count uint8 The number of messages that are directly approved.
Parents anyOf
Parent
References another directly approved message.
Name Type Description
Message ID ByteArray[32] The Message ID of the parent.
Inclusion Merkle Root ByteArray[32] The Merkle tree hash (BLAKE2b-256) of the message IDs of all the not-ignored state-mutating transaction payloads referenced by the milestone (RFC-0012).
Next PoW Score uint32 The new PoW score all messages should adhere to. If 0 then the PoW score should not change.
Next PoW Score Milestone Index uint32 The index of the first milestone that will require a new minimal pow score for applying transactions. This field comes into effect only if the Next PoW Score field is not 0.
Keys Count uint8 Number of public keys entries.
Keys anyOf
Ed25519 Public Key
Name Type Description
Public Key ByteArray[32] The public key of the Ed25519 keypair which is used to verify the corresponding signature.
Payload Length uint32 The length in bytes of the optional payload.
Payload optOneOf
Generic Payload
An outline of a generic payload.
Name Type Description
Payload Type uint32 The type of the payload. It will instruct the node how to parse the fields that follow.
Data Fields ANY A sequence of fields, where the structure depends on Payload Type.
Signatures Count uint8 Number of signature entries. The number must match the field Keys Count.
Signatures anyOf
Raw Ed25519 Signature
Name Type Description
Signature ByteArray[64] The Ed25519 signature signing the BLAKE2b-256 hash of the serialized Milestone Essence. The signatures must be in the same order as the specified public keys.

Generation

  • Generate a new Milestone Essence corresponding to the Coordinator milestone.
  • Transmit the serialized Milestone Essence to the corresponding number of signature service providers.
    • The signature provider service will sign the received serialized bytes as-is.
    • The signature provider will serialize the signature bytes and return them to the Coordinator.
  • Fill the Signatures field of the milestone payload with the received signature bytes.
  • Generate a Message as defined in TIP-6 using the same Parents as in the created Milestone Payload.

Syntactical validation

  • Parents of the payload must match Parents of the encapsulating Message.
  • PoW score:
    • If Next PoW Score is zero, Next PoW Score Milestone Index must also be zero.
    • Otherwise Next PoW Score Milestone Index must be larger than Index Number.
  • Keys:
    • Keys Count must be at least the Signature Threshold and at most the number of Applicable Public Keys for the current milestone index.
    • Keys must be sorted in lexicographical order.
    • Each Public Key must be unique.
    • Keys must form a subset of the Applicable Public Keys for the current milestone index.
  • Payload (if present):
    • Payload Type must match one of the values described under Payloads.
    • Data fields must be correctly parsable in the context of the Payload Type.
    • The payload itself must pass syntactic validation.
  • Signatures:
    • Signatures Count must match Keys Count.
    • Signature at index i must be valid with respect to the Public Key at the same index.
  • Given the type and length information, the Milestone Payload must consume the entire byte array of the Payload field of the Message.

Payloads

The Milestone Payload itself can contain another payload as described in general in TIP-6. The following table lists all the payloads types that can be nested inside a Milestone Payload as well as links to the corresponding specification:

Payload Name Type Value TIP
Receipts 4 TIP-15

Rationale and alternatives

  • Instead of using EdDSA we could have chosen ECDSA. Both algorithms are well supported and widespread. However, signing with ECDSA requires fresh randomness while EdDSA does not. Especially in the case of milestones where essences are signed many times using the same key, this is a crucial property.
  • Due to the layered design of messages and payloads, it is practically not possible to prevent reattachments of Milestone Payloads. Hence, this payload has been designed in a way to be independent from the message it is contained in. A milestone should be considered as a virtual marker (referencing Parents) rather than an actual message in the Tangle. This concept is compatible with reattachments and supports a cleaner separation of the message layers.
  • Forcing matching Parents in the Milestone Payload and its Message makes it impossible to reattach the same payload at different positions in the Tangle. This does not prevent reattachments in general (a different, valid Nonce, for example, would lead to a new Message ID) and it violates a strict separation of payload and message. However, it simplifies milestone processing as the position of the Message will be the same as the position encoded in the Milestone Payload. Having these clear structural properties seems more desirable than a strict separation of layers.

Copyright

Copyright and related rights waived via CC0.

tip: 9
title: Local Snapshot File Format
description: File format to export/import ledger state 
author: Luca Moser (@luca-moser) 
discussions-to: https://github.com/iotaledger/tips/pull/25
status: Replaced
type: Standards
layer: Interface
created: 2020-08-25
superseded-by: TIP-35

Summary

This RFC defines a file format for local snapshots which is compatible with Chrysalis Phase 2.

Motivation

Nodes create local snapshots to produce ledger representations at a point in time of a given milestone to be able to:

  • Start up from a recent milestone instead of having to synchronize from the genesis transaction.
  • Delete old transaction data below a given milestone.

Current node implementations use a local snapshot file format which only works with account based ledgers. For Chrysalis Phase 2, this file format has to be adapted to support a UTXO based ledger.

Detailed design

Since a UTXO based ledger is much larger in size, this RFC proposes two formats for snapshot files:

  • A full format which represents a complete ledger state.
  • A delta format which only contains diffs (created and consumed outputs) of milestones from a given milestone index onwards.

This separation allows nodes to swiftly create new delta snapshot files, which then can be distributed with a companion full snapshot file to reconstruct a recent state.

Unlike the current format, these new formats do not include spent addresses since this information is no longer held by nodes.

Formats

All types are serialized in little-endian encoding.

Full Ledger State

A full ledger snapshot file contains the UTXOs (outputs section) of a node's confirmed milestone (ledger_milestone_index). The milestone diffs contain the information needed to roll the outputs state back to the ledger state of the snapshot milestone at seps_milestone_index.

While the node producing such a full ledger state snapshot could theoretically pre-compute the actual snapshot milestone state, this is deferred to the consumer of the data to speed up local snapshot creation.

Delta Ledger State

A delta ledger state local snapshot only contains the diffs of milestones starting from a given ledger_milestone_index. A node consuming such data must know the state of the ledger at ledger_milestone_index.

Schema

Output

Defines an output.

Name Type Description
Message Hash Array<byte>[32] The hash of the message in which the transaction was contained which generated this output.
Transaction Hash Array<byte>[32] The hash of the transaction which generated this output.
Output Index uint16 The index of this output within the transaction.
Output oneOf
SigLockedSingleDeposit
Name Type Description
Output Type byte Set to value 0 to denote a SigLockedSingleDeposit.
Address oneOf
Ed25519 Address
Name Type Description
Address Type byte/varint Set to value 0 to denote an Ed25519 Address.
Address ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Amount uint64 The amount of tokens this output deposits.
SigLockedDustAllowanceDeposit
Name Type Description
Output Type byte Set to value 1 to denote a SigLockedDustAllowanceDeposit.
Address oneOf
Ed25519 Address
Name Type Description
Address Type byte/varint Set to value 0 to denote an Ed25519 Address.
Address ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Amount uint64 The amount of tokens this output deposits.
Milestone Diff

Defines the diff a milestone produced by listing the created/consumed outputs and the milestone payload itself.

Name Type Description
Milestone Payload length uint32 Denotes the length of the milestone payload.
Milestone Payload Array<byte>[Milestone Payload length] The milestone payload in its serialized binary form.
Treasury Input
only included if milestone contains a receipt
Name Type Description
Treasury Input Milestone Hash Array<byte>[32] The hash of the milestone this input references.
Treasury Input Amount uint64 The amount of this treasury input.
Created Outputs Count uint64 The amount of outputs generated with this milestone diff.
Created Outputs anyOf
Output
Consumed Outputs Count uint64 The amount of outputs consumed with this milestone diff.
Consumed Outputs anyOf
Output
Full snapshot file format

Defines what a full snapshot file contains.

Name Type Description
Version byte Denotes the version of this file format.
Type byte Denotes the type of this file format. Value 0 denotes a full snapshot.
Timestamp uint64 The UNIX timestamp in seconds of when this snapshot was produced.
Network ID uint64 The ID of the network to which this snapshot is compatible.
SEPs milestone index uint64 The milestone index for which the SEPs were generated.
Ledger milestone index uint64 The milestone index of which the UTXOs within the snapshot are from.
SEPs count uint64 The amount of SEPs contained within this snapshot.
Outputs count uint64 The amount of UTXOs contained within this snapshot.
Milestone diffs count uint64 The amount of milestone diffs contained within this snapshot.
Treasury Output Milestone Hash Array<byte>[32] The milestone hash of the milestone which generated the treasury output.
Treasury Output Amount uint64 The amount of funds residing on the treasury output.
SEPs
SEP Array<byte>[32]
Outputs
Output
Milestone Diffs
Milestone Diff
Delta snapshot file format

Defines what a delta snapshot contains.

| Name | Type | Description |
| --- | --- | --- |
| Version | byte | Denotes the version of this file format. |
| Type | byte | Denotes the type of this file format. Value 1 denotes a delta snapshot. |
| Timestamp | uint64 | The UNIX timestamp in seconds of when this snapshot was produced. |
| Network ID | uint64 | The ID of the network to which this snapshot is compatible. |
| SEPs milestone index | uint64 | The milestone index for which the SEPs were generated. |
| Ledger milestone index | uint64 | The milestone index upon which this delta snapshot builds. |
| SEPs count | uint64 | The amount of SEPs contained within this snapshot. |
| Milestone diffs count | uint64 | The amount of milestone diffs contained within this snapshot. |
| SEPs | SEP Array&lt;byte&gt;[32] | |
| Milestone Diffs | Milestone Diff | |

Drawbacks

Nodes need to support this new format.

Rationale and alternatives

  • In conjunction with a companion full snapshot, a tool or node can "truncate" the data from a delta snapshot back to a single full snapshot. In that case, the ledger_milestone_index and seps_milestone_index would be the same. In the example above, given the full and delta snapshots, one could produce a new full snapshot for milestone 1350.
  • Since snapshots may include millions of UTXOs, code generating such files needs to stream data directly onto disk instead of keeping the entire representation in memory. In order to facilitate this, the count denotations for SEPs, UTXOs and diffs are at the beginning of the file. This allows code generating snapshot files to only have to seek back once after the actual count of elements is known.
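The streaming approach above can be sketched in Python. This is a hypothetical, simplified layout that keeps only the three count fields; the real file format also carries the version, type, timestamp and other header fields described above:

```python
import struct

def write_counts_first(path, seps, outputs, diffs):
    """Stream snapshot elements to disk, patching the counts with one seek.

    Simplified sketch: only the three uint64 count fields and the raw
    element bytes are written; the remaining header fields are omitted.
    """
    with open(path, "wb") as f:
        counts_pos = f.tell()
        f.write(struct.pack("<QQQ", 0, 0, 0))   # placeholder counts
        counts = [0, 0, 0]
        for i, stream in enumerate((seps, outputs, diffs)):
            for element in stream:              # works for generators too:
                f.write(element)                # nothing is held in memory
                counts[i] += 1
        f.seek(counts_pos)                      # the single seek back
        f.write(struct.pack("<QQQ", *counts))
```

Because the counts sit at a fixed offset near the start of the file, the writer never needs to buffer the full UTXO set; it patches the final counts in place once the streams are exhausted.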

Unresolved questions

  • Is all the information to startup a node from the local snapshot available with the described format?

Copyright

Copyright and related rights waived via CC0.

tip: 10
title: Mnemonic Ternary Seed
description: Represent ternary seed as a mnemonic sentence
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/10
status: Obsolete
type: Standards
layer: IRC
created: 2020-03-11

Summary

The IOTA protocol uses a 81-tryte seed to derive all the private keys for one account. This RFC describes a method to represent that seed as a mnemonic sentence - a group of easily comprehensible words.

Motivation

The used seed is a 384-bit or 243-trit random string. There are several ways to represent this in a human-readable form, but a mnemonic sentence is superior to raw binary or ternary strings. These sentences can easily be written down or they can even be recorded over a phone. Furthermore, having raw strings tempts the user to copy and paste the seed due to convenience over security. This practice opens new attack vectors such as theft or manipulation of the string in the clipboard.

Detailed design

The BIP-0039 specification exactly describes an implementation for its use case and how to uniquely represent binary entropy using mnemonic words. However, it is only defined for binary input of 128 - 256 bits. The section(s) below describe the canonical extension of BIP-0039 for longer inputs of 81 trytes or 384 bits.

The 243-trit (81-tryte) seed is used as input for the Kerl hash function to derive the private keys. Therefore, it is first converted to a 384-bit string to be absorbed by Keccak-384. As the set of all possible 243-trit strings is larger than the set of 384-bit strings, the most significant trit is fixed to zero before converting. This means that the 243rd trit of the seed is ignored: it has no impact on the subsequent key derivation and does not need to be considered for the encoding.

Generating the mnemonic from seed

  • Interpret the IOTA seed as a little-endian 243-trit balanced ternary integer; assure that its most significant trit is 0 and encode the number as a 384-bit signed integer in big-endian two's complement representation. This exact conversion is also used as part of the current Kerl hash function.
  • Compute the SHA-256 hash of the resulting bit string and use its first 384/32=12 bits as the checksum.
  • The 12-bit checksum is appended to the end of the initial result, making it a 396-bit string.
  • These concatenated bits are split into 36 groups of 11 bits, each encoding a number from 0-2047, corresponding to an index into the wordlist.
  • Finally, convert these numbers into words from any one of the BIP-0039 wordlists and use the joined words as a mnemonic sentence.
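A minimal sketch of the checksum and word-index computation in Python, assuming the 384-bit entropy has already been obtained from the ternary-to-binary conversion (the wordlist lookup itself is omitted):

```python
import hashlib

def entropy_to_word_indices(entropy: bytes) -> list:
    """Map 384 bits of entropy to 36 wordlist indices (0-2047 each)."""
    assert len(entropy) == 48                                 # 384 bits
    digest = hashlib.sha256(entropy).digest()
    checksum = int.from_bytes(digest[:2], "big") >> 4         # first 12 bits
    combined = (int.from_bytes(entropy, "big") << 12) | checksum  # 396 bits
    # Split into 36 groups of 11 bits, most significant group first.
    return [(combined >> (11 * (35 - i))) & 0x7FF for i in range(36)]
```

Each returned index selects one word from the 2048-word BIP-0039 list; joining the 36 words yields the mnemonic sentence.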

Generating the seed from mnemonic

  • Convert the 36-word mnemonic sentence into its corresponding 396-bit string by taking the 11-bit wordlist index for each word and concatenating all the bits.
  • Split the resulting bit string into 384-bit entropy and 12-bit checksum.
  • Verify that the checksum corresponds to the first bits of the SHA-256 hash of the entropy.
  • Convert the 384-bit entropy, interpreted as a signed integer in big-endian two's complement representation, back to a little-endian 243-trit balanced ternary integer. (The most significant trit will always be zero.) This corresponds to the usual 243-trit or 81-tryte IOTA seed.
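The reverse direction, including the checksum verification, can be sketched as follows (again leaving out the wordlist lookup and the final binary-to-ternary conversion):

```python
import hashlib

def word_indices_to_entropy(indices: list) -> bytes:
    """Recover the 384-bit entropy from 36 word indices, verifying the checksum."""
    assert len(indices) == 36
    combined = 0
    for index in indices:                      # concatenate 36 groups of 11 bits
        combined = (combined << 11) | index
    entropy = (combined >> 12).to_bytes(48, "big")   # leading 384 bits
    checksum = combined & 0xFFF                      # trailing 12 bits
    digest = hashlib.sha256(entropy).digest()
    if checksum != int.from_bytes(digest[:2], "big") >> 4:
        raise ValueError("invalid mnemonic checksum")
    return entropy
```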

Examples

  • Using the English word list:
    • IOTA seed (81-tryte): TLEV9HDTGZOXIIGA9DZG9VAKAIUZKNIMAFUGGARTWPOGDLLVUFZZVAABXRMFPWJAYWBBHOERV9EZBAOJD
    • mnemonic (36-word): forget small borrow baby wing law monkey fiber jealous canyon melt all order lift now fish mind index neither discover divert fit curtain raw wealth arrow frozen plug catalog public winner emerge pulse mixture cry arch
  • Using the Japanese word list:
    • IOTA seed (81-tryte): KMQDKKLGGTPUBRJXYWLMQOIA9WIEWUAAJPASYPVAWOTYYH9JESDKPLVZIWITHDIUMLFEWQUQ9LHAV9GHC
    • mnemonic (36-word): げどく まもる してい ていへん つめたい ちつじょ だいたい てうち まいにち さゆう よそく がはく ねらう いちおう くみあわせ ふいうち せつでん きせい すべて きひん しかい さぎょう うけたまわる つとめる おんしゃ きかい なやむ たいせつ うんこう むすめ いってい ふめつ そとづら つくね おいこす ききて

Drawbacks

  • This RFC describes a way to represent computer-generated randomness in a human-readable transcription. It is in no way meant to process user created sentences into a binary key. This technique is also sometimes called a "brain wallet" and must not be confused with these mnemonics.
  • The mnemonics only encode 384 bits of entropy, which only covers 242 trits. The 243rd trit is not encoded and is always padded with 0. This is fine when Kerl is used to derive the private keys, since the Kerl hash function itself only works on the first 242 trits. However, other - currently not used - key derivation functions relying on the full 243-trit entropy are not compatible with this RFC.

Rationale and alternatives

  • BIP-0039 provides an industry standard to present computer generated, secure entropy in a way that can be "processed" by humans in a much less error-prone way. The word lists are chosen in a way to reduce ambiguity, as such, typos can either be autocorrected or corrected with the help of a dictionary. This is in contrast to a raw ternary (or binary) representation, where typos automatically lead to a completely new seed, changing and breaking all successive private keys.
  • Thanks to the integrated 12-bit checksum, it is even possible to detect whether one or more words have been exchanged completely.
  • Presenting the user with a tryte or hex string will lead to situations in which the seed is copied into a text file, while human-readable words encourage the user to copy them on a piece of paper.

Unresolved questions

  • This RFC does not cover usability aspects of entering mnemonics. Forcing the user to enter a mnemonic sentence and then discarding the input, due to one easily correctable typo in one word, would almost be as frustrating as typing a tryte string. Therefore, this must be combined with different usability improvements, e.g. only allowing entering characters that lead to valid words or fixing the word as soon as it can be unambiguously identified.
  • The BIP-0039 specification includes several word lists for different languages. Should these word lists be allowed or is it sufficient to only use the English list?
  • The BIP-0039 specification only considers entropy between 128 and 256 bits, while this RFC extends it in an analogue way for 384 bits. Is it also relevant for certain use cases to extend this for 512 bits (or even longer)?

Reference implementation

Example Go implementation in wollac/iota-crypto-demo:

Copyright

Copyright and related rights waived via CC0.

tip: 11
title: Bech32 Address Format
description: Extendable address format supporting various signature schemes and address types
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/20
status: Replaced
type: Standards
layer: Interface
created: 2020-07-28
superseded-by: TIP-31

Summary

This document proposes an extendable address format for the IOTA protocol supporting various signature schemes and address types. It relies on the Bech32 format to provide a compact, human-readable encoding with strong error correction guarantees.

Motivation

With Chrysalis, IOTA uses Ed25519 to generate digital signatures, in which addresses correspond to a BLAKE2b-256 hash. It is necessary to define a new universal and extendable address format capable of encoding different types of addresses.

The current IOTA protocol relies on Base27 addresses with a truncated Kerl checksum. However, both the character set and the checksum algorithm have limitations:

  • Base27 is designed for ternary and is ill-suited for binary data.
  • The Kerl hash function also requires ternary input. Further, it is slow and provides no error-detection guarantees.
  • It does not support the addition of version or type information to distinguish between different kinds of addresses with the same length.

All of these points are addressed in the Bech32 format introduced in BIP-0173: In addition to the usage of the human-friendly Base32 encoding with an optimized character set, it implements a BCH code that guarantees detection of any error affecting at most four characters and has less than a 1 in 10^9 chance of failing to detect more errors.

This RFC proposes a simple and extendable binary serialization for addresses of different types that is then Bech32 encoded to provide a unique appearance for human-facing applications such as wallets.

Detailed design

Binary serialization

The address format uses a simple serialization scheme which consists of two parts:

  • The first byte describes the type of the address.
  • The remaining bytes contain the type-specific raw address bytes.

Currently, only one kind of address is supported:

  • Ed25519, where the address consists of the BLAKE2b-256 hash of the Ed25519 public key.

They are serialized as follows:

| Type | First byte | Address bytes |
| --- | --- | --- |
| Ed25519 | 0x00 | 32 bytes: The BLAKE2b-256 hash of the Ed25519 public key. |

Bech32 for human-readable encoding

The human-readable encoding of the address is Bech32 (as described in BIP-0173). A Bech32 string is at most 90 characters long and consists of:

  • The human-readable part (HRP), which conveys the IOTA protocol and distinguishes between Mainnet (the IOTA token) and Testnet (testing version):
    • iota is the human-readable part for Mainnet addresses
    • atoi is the human-readable part for Testnet addresses
  • The separator, which is always 1.
  • The data part, which consists of the Base32 encoded serialized address and the 6-character checksum.

Hence, Ed25519-based addresses will result in a Bech32 string of 64 characters.
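The encoding can be reproduced with a Python port of the BIP-0173 reference routines (Pieter Wuille's public-domain reference code); `convertbits` regroups the serialized address bytes into the 5-bit Base32 groups before the checksum is appended:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def bech32_polymod(values):
    """BCH checksum state update over a list of 5-bit values (BIP-0173)."""
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for value in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ value
        for i in range(5):
            chk ^= GEN[i] if ((top >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    """Expand the human-readable part for checksum computation."""
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32_create_checksum(hrp, data):
    """Compute the 6-character checksum for the given HRP and data."""
    polymod = bech32_polymod(bech32_hrp_expand(hrp) + data + [0] * 6) ^ 1
    return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

def bech32_encode(hrp, data):
    """Assemble HRP, separator '1', data and checksum into a Bech32 string."""
    combined = data + bech32_create_checksum(hrp, data)
    return hrp + "1" + "".join(CHARSET[d] for d in combined)

def convertbits(data, frombits, tobits):
    """Regroup a byte/value sequence into groups of `tobits` bits, padded."""
    acc, bits, ret = 0, 0, []
    maxv = (1 << tobits) - 1
    for value in data:
        acc = (acc << frombits) | value
        bits += frombits
        while bits >= tobits:
            bits -= tobits
            ret.append((acc >> bits) & maxv)
    if bits:
        ret.append((acc << (tobits - bits)) & maxv)
    return ret
```

For example, `bech32_encode("iota", convertbits(serialized_address, 8, 5))` produces the 64-character Mainnet address strings shown in the examples below.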

Examples

  • Mainnet
    • Ed25519 compressed public key (32-byte): 6f1581709bb7b1ef030d210db18e3b0ba1c776fba65d8cdaad05415142d189f8
      • BLAKE2b-256 hash (32-byte): efdc112efe262b304bcf379b26c31bad029f616ee3ec4aa6345a366e4c9e43a3
      • serialized (33-byte): 00efdc112efe262b304bcf379b26c31bad029f616ee3ec4aa6345a366e4c9e43a3
      • Bech32 string: iota1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6xqgyzyx
  • Testnet
    • Ed25519 compressed public key (32-byte): 6f1581709bb7b1ef030d210db18e3b0ba1c776fba65d8cdaad05415142d189f8
      • BLAKE2b-256 hash (32-byte): efdc112efe262b304bcf379b26c31bad029f616ee3ec4aa6345a366e4c9e43a3
      • serialized (33-byte): 00efdc112efe262b304bcf379b26c31bad029f616ee3ec4aa6345a366e4c9e43a3
      • Bech32 string: atoi1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6x8x4r7t

Drawbacks

  • The new addresses look fundamentally different from the established 81-tryte IOTA addresses. However, since the switch from ternary to binary and Chrysalis in general is a substantial change, this is a very reasonable and desired consequence.
  • A four character HRP plus one type byte only leaves a maximum of 48 bytes for the actual address.

Rationale and alternatives

  • There are several ways to convert the binary serialization into a human-readable format, e.g. Base58 or hexadecimal. The Bech32 format, however, offers the best compromise between compactness and error correction guarantees. A more detailed motivation can be found in BIP-0173 Motivation.
  • The binary serialization itself must be as compact as possible while still allowing you to distinguish between different address types of the same byte length. As such, the introduction of a version byte offers support for up to 256 different kinds of addresses at only the cost of one single byte.
  • The HRP of the Bech32 string offers a good opportunity to clearly distinguish IOTA addresses from other Bech32 encoded data. Here, any three or four character ASCII strings can be used. However, selecting iota as well as atoi seems like the most recognizable option.

Reference implementation

Example Go implementation in wollac/iota-crypto-demo:

Copyright

Copyright and related rights waived via CC0.

tip: 12
title: Message PoW
description: Define message proof-of-work as a means to rate-limit the network
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/24
status: Active
type: Standards
layer: Core
created: 2020-08-25

Summary

The IOTA protocol uses proof-of-work as a means to rate-limit the network. Currently, the Curl-P-81 trinary hash function is used and is required to provide a hash with the matching number of trailing zero trits to issue a transaction to the Tangle. With Chrysalis, it will be possible to issue binary messages of arbitrary size. This RFC presents a proposal to adapt the existing PoW mechanism to these new requirements. It aims to be as minimally disruptive to the current PoW mechanism as possible.

Motivation

In the current IOTA Protocol, each transaction has a fixed size of 8019 trits and is hashed using Curl-P-81 to compute its 243-trit transaction hash, where the PoW's difficulty equals the number of trailing zero trits in that hash.
Unfortunately, the performance of Curl-P-81 is slow, achieving only about 2 MB/s on a single core. This would make the PoW validation a bottleneck, especially for high usage scenarios with larger messages. Thus, this RFC proposes a two-stage approach to speed up the validation process: First, the BLAKE2b-256 hash function is used to create a short, fixed length digest of the message. Then, this digest, together with the nonce, is hashed using Curl-P-81. Since the digest only needs to be computed once while iterating over different nonce values, this preserves Curl as the PoW-relevant hash. However, the validation is much faster, as BLAKE2b-256 has a performance of about 1 GB/s and Curl then only needs to be executed for one single 243-trit block of input. Since the input of the final Curl computation is always fixed, parallel Curl variants can be used in this stage to further speed up the validation if necessary.
Furthermore, it is important to note that the time required to do the PoW depends on the PoW difficulty and not on the message length. As such, to treat messages with different lengths differently, we need to weight the PoW difficulty by the message length.

It will be easy to adapt existing hardware and software implementations of the current PoW mechanism to the proposed design: only the input and the position of the nonce in the buffer need to be adapted. This enables existing Curl projects to persist, and the overall PoW landscape should stay almost the same.

Detailed design

The PoW score is defined as the average number of hash iterations required to find the given number of trailing zero trits, divided by the message size in bytes.

The PoW validation is performed in the following way:

  • Compute the BLAKE2b-256 hash of the serialized message (as described in TIP-6) excluding the 8-byte Nonce field and convert the hash into its 192-trit b1t6 encoding. (See TIP-5 for a description of the encoding.)
  • Take the 8-byte Nonce in little-endian representation, convert it into its 48-trit b1t6 encoding and append it to the hash trits.
  • Add a padding of three zero trits to create a 243-trit string.
  • Compute the Curl-P-81 hash.
  • Count the number of trailing zero trits in the hash.
  • Then, the PoW score equals 3^#zeros / size(message).

This can also be summarized with the following pseudocode:

pow_digest ← BLAKE2b-256(serialized message excluding nonce field)
pow_hash ← Curl-P-81(b1t6(pow_digest) || b1t6(nonce) || [0, 0, 0])
pow ← 3**trailing_zeros(pow_hash) / size

where size is the number of bytes of the full serialized message.

Example

  • Message including nonce (21-byte): 48656c6c6f2c20576f726c64215ee6aaaaaaaaaaaa
  • PoW digest (32-byte): 511bc81dde11180838c562c82bb35f3223f46061ebde4a955c27b3f489cf1e03
  • Nonce (8-byte): 5ee6aaaaaaaaaaaa (12297829382473049694)
  • Curl input (81-tryte): 9C9AYYBATZQAXAH9BBVYQDYYPBDXNDWBHAO9ODPDFZTZTCAWKCLADXO9PWEYCAC9MCAZVXVXVXVXVXVX9
  • PoW hash (81-tryte): DJCAGKILZPLXNXWFTNXFLCHRFVUHHMTPFIOFKQXMGIKITSEVWECMQOKCFXDIIHK9YVHGQICAIVEVDJ999
  • Trailing zeros: 9
  • PoW score: 3^9 / 21 = 937.2857142857143
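The score arithmetic of this example can be checked with two small helpers (a minimal sketch; trit strings are represented as sequences of -1/0/1 values):

```python
def trailing_zeros(trits) -> int:
    """Count trailing zero trits of a hash given as a sequence of -1/0/1."""
    count = 0
    for trit in reversed(trits):
        if trit != 0:
            break
        count += 1
    return count

def pow_score(zeros: int, message_size: int) -> float:
    """PoW score = 3^zeros / size of the full serialized message in bytes."""
    return 3 ** zeros / message_size
```

With 9 trailing zeros and a 21-byte message this reproduces the score 3^9 / 21 from the example above.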

Drawbacks

  • Curl is a ternary hash function that is applied on binary data. This makes it necessary to introduce an additional encoding step. However, the proposed b1t6 encoding is reasonably performant. Additionally, hash functions usually contain an encoding step to write the input into their internal state. In that sense, the b1t6 encoding is not much different.
  • One additional trailing zero in the PoW hash effectively allows the message size to be tripled. This could potentially incentivize users to add otherwise unnecessary data, when the PoW difficulty stays the same. Using a binary hash function instead of Curl would only slightly improve this situation as the allowed message length remains exponential in the difficulty parameter.

Rationale and alternatives

The premise of this proposal is that the PoW should remain Curl-based to cause the least amount of disruption to the protocol and its established projects. Therefore, other hash functions or PoW algorithms have not been considered. However, modifications of the described calculation are possible:

  • There are several potential encodings for the nonce: E.g. converting its value directly to balanced ternary (the most compact encoding) or using the b1t8 encoding. The chosen b1t6 encoding achieves a nice balance between compactness and performance. Since it is possible to fit the PoW digest and the b1t6 encoded nonce into one Curl block, the simplicity of having only one encoding (for PoW digest and nonce) was preferred over minimal performance improvements other encodings could bring.
  • Curl can be computed directly on the b1t6 encoded message (after an appropriate padding has been added). However, performance analysis of existing node implementation suggests that the Curl computations during the PoW could become critical, especially since parallel Curl implementations would be much more difficult to deploy because of the dynamic message lengths.
  • BLAKE2b-256 could be replaced with BLAKE2b-512 or any other binary cryptographic hash function. However, a 256-bit digest fits very nicely into exactly one Curl block and since BLAKE2b-256 is also used for the message ID, it is reasonable to also use it for the PoW digest. This reduces the number of required hashing implementations and even allows reusage of intermediate values between the PoW digest and the message ID computation.

The PoW score formula 3^#zeros / size(message) could be replaced with an alternative function to better match the network usage, e.g. in order to penalize longer messages more than linearly.

Reference implementation

Example Go implementation in wollac/iota-crypto-demo:

Copyright

Copyright and related rights waived via CC0.

tip: 13
title: REST API
description: Node REST API routes and objects in OpenAPI Specification
author: Samuel Rufinatscha (@rufsam) 
discussions-to: https://github.com/iotaledger/tips/pull/26
status: Replaced
type: Standards
layer: Interface
created: 2020-09-10
superseded-by: TIP-25

Summary

This document proposes the REST API for nodes supporting the IOTA protocol.

API

The API is described using the OpenAPI Specification:

Swagger Editor

Copyright

Copyright and related rights waived via CC0.

tip: 14
title: Ed25519 Validation
description: Adopt https://zips.z.cash/zip-0215 to explicitly define Ed25519 validation criteria
author: Gal Rogozinski (@GalRogozinski) , Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/28
status: Active
type: Standards
layer: Core
created: 2020-10-30

Summary

The IOTA protocol uses Ed25519 signatures to assure the authenticity of transactions in Chrysalis. However, although Ed25519 is standardized in IETF RFC 8032, it does not define strict validation criteria. As a result, compatible implementations do not need to agree on whether a particular signature is valid or not. While this might be acceptable for classical message signing, it is unacceptable in the context of consensus critical applications like IOTA.

This RFC proposes to adopt ZIP-215 to explicitly define validation criteria. This mainly involves the decoding, validation and malleability criteria of the Ed25519 scheme, detailed in the sections below.

Motivation

Based on Chalkias et al. 2020 we know that:

  1. Not all implementations follow the decoding rules defined in RFC 8032, but instead accept non-canonically encoded inputs.
  2. The RFC 8032 provides two alternative verification equations, whereas one is stronger than the other. Different implementations use different equations and therefore validation results vary even across implementations that follow the RFC 8032.

This lack of consistent validation behavior is especially critical for IOTA as they can cause a breach of consensus across node implementations! For example, one node implementation may consider a particular transaction valid and mutate the ledger state accordingly, while a different implementation may discard the same transaction due to invalidity. This would result in a network fork and could only be resolved outside of the protocol. Therefore, an explicit and unambiguous definition of validation criteria, such as ZIP-215, is necessary.

Furthermore, it is important to note that the holder of the secret key can produce more than one valid distinct signature. Such transactions with the same essence but different signatures are considered as double spends by the consensus protocol and handled accordingly. While this does not pose a problem for the core protocol, it may be a problem for 2nd layer solutions, similar to how transaction malleability in bitcoin presented an issue for the lightning network.

Detailed design

In order to have consistent validation of Ed25519 signatures for all edge cases and throughout different implementations, this RFC proposes explicit validation criteria. These three criteria must be checked to evaluate whether a signature is valid.

Using the notation and Ed25519 parameters as described in the RFC 8032, the criteria are defined as follows:

  1. Accept non-canonical encodings of A and R.
  2. Reject values for S that are greater or equal than L.
  3. Use the equation [8][S]B = [8]R + [8][k]A' for validation.

In the following, we will explain each of these in more detail.

Decoding

The Curve25519 is defined over the finite field of order p = 2^255 − 19. A curve point (x,y) is encoded into its compressed 32-byte representation, namely by the 255-bit encoding of the field element y followed by a single sign bit that is 1 for negative x (see RFC 8032, Section 3.1) and 0 otherwise. This approach provides a unique encoding for each valid point. However, there are two classes of edge cases representing non-canonical encodings of valid points:

  • encoding a y-coordinate as y + p
  • encoding a curve point (0,y) with the sign bit set to 1

In contrast to RFC 8032, it is not required that the encodings of A and R are canonical. As long as the corresponding (x,y) is a valid curve point, any of such edge cases will be accepted.

Validation

The RFC 8032 mentions two alternative verification equations:

  1. [8][S]B = [8]R + [8][k]A'
  2. [S]B = R + [k]A'

Each honestly generated signature following RFC 8032 satisfies the second, cofactorless equation and thus also the first equation. However, the opposite is not true: a dishonestly generated nonce R or public key A' might have an order other than L. Testing whether a point has order L is costly. The first, cofactored equation accepts more nonces and public keys, including dishonestly generated ones, but lets us skip costly order checks. This has the impact that each secret key has not one but eight corresponding public keys; however, all those public keys correspond to different addresses.
There are signatures satisfying the first equation but not the second. This ambiguity in RFC 8032 has led to the current situation in which different implementations rely on different verification equations.

Ed25519 also supports batch signature verification, which allows verifying several signatures in a single step, much faster than verifying signatures one-by-one. Without going into detail, there are also two alternative verification equations for the batch verification:
[8][∑zᵢsᵢ] B = [8]∑[zᵢ]Rᵢ + [8]∑[zᵢhᵢ]Aᵢ and its corresponding cofactorless version. However, only cofactored verifications, single and batch, are compatible with each other. All other combinations are inconsistent and can lead to false positives or false negatives (see Chalkias et al. 2020, Section 3.2) for certain edge-cases introduced by an attacker.
Thus, in order to allow batch signature verification and its faster performance in IOTA nodes, the cofactored version must be used for validation, i.e. the group equation [8][S]B = [8]R + [8][k]A' for the single verification.

Since non-canonical encodings of A and R are allowed, it is crucial to also specify which representation must be used for the hash functions:

  • The provided binary encodings of A and R must be used as input to the hash function H instead of their canonical – and potentially different – representation.
  • During transaction validation, when the public key A is checked against the output's address, the provided binary encoding must be used for the BLAKE2b-256 hash instead of its canonical representation.

Malleability

The non-negative integer S is encoded into 32 bytes as part of the signature. However, a third party could replace S with S' = S + n·L for any natural n with S' < 2^256 and the modified signature R || S' would still pass verification. Requiring a value less than L resolves this malleability issue. Unfortunately, this check is not present in all common Ed25519 implementations.

Analogous to RFC 8032, the encoding of S must represent an integer less than L.
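A sketch of this canonicity check on S, with the group order L taken from RFC 8032:

```python
# Ed25519 group order L = 2^252 + 27742317777372353535851937790883648493 (RFC 8032).
L = 2 ** 252 + 27742317777372353535851937790883648493

def has_canonical_s(signature: bytes) -> bool:
    """Criterion 2: the 32-byte little-endian S must encode an integer < L."""
    assert len(signature) == 64           # R (32 bytes) || S (32 bytes)
    s = int.from_bytes(signature[32:], "little")
    return s < L
```

Any signature failing this check must be rejected before the group equation is even evaluated, closing the S-malleability vector described above.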

It is not possible for an external party to mutate R and still pass verification. The owner of the secret key, however, can create many different signatures for the same content: While Ed25519 defines a deterministic method of calculating the integer scalar r from the private key and the message, it is impossible to tell during signature verification if the point R = [r]B was created properly or any other scalar has been used.
As a result, there is a practically countless amount of different valid signatures corresponding to a certain message and public key.

We allow users to have a zero-scalar secret key and consider eight corresponding public keys valid. However, users should not use it as it is equivalent to publishing one's secret key. This also has the impact that any valid signature produced with a zero-scalar secret key will authenticate any message thus making it "super"-malleable.

Test vectors

The test vectors are taken directly from Chalkias et al. 2020. Here, pub_key corresponds to the encoding of A and address is a 33-byte Ed25519 Address as described in TIP-7. The address is computed by hashing A. As mentioned in the paper, for test case #10 the key A is reduced before hashing, while in the others it is not. The key valid denotes whether the corresponding item represents a valid signature for the provided address and message or not.

Drawbacks

  • Allowing non-canonical encodings is a direct contradiction of RFC 8032 and rather unintuitive. Furthermore, it introduces alternative encodings for a handful of points on the curve. Even though such points will, for all practical purposes, never occur in honest signatures, it still theoretically introduces an external party malleability vector.
  • The cofactored validation is computationally slightly more expensive than the cofactorless version since it requires a multiplication by 8.

Rationale and alternatives

In the IOTA protocol, the Transaction ID corresponds to the hash over the entire transaction including the actual signature bytes. Therefore, it is absolutely crucial that (valid) signatures are not malleable by a public attacker, i.e. that the used Ed25519 variant is strongly-unforgeable. Allowing non-canonical point encodings does not introduce the same attack vector. As such, both options would lead to valid Ed25519 variants.

Unfortunately, the Ed25519 ref10 reference implementation as well as other implementations accept non-canonical points. As such, rejecting those inputs now would introduce a breaking change. While this might be acceptable for the IOTA protocol itself, since no Ed25519 signatures have been added to the ledger prior to this RFC, other consensus-critical applications require this backward compatibility with previously accepted signatures. Due to these considerations, the criterion was included in ZIP-215 to allow a seamless transition for existing consensus-critical contexts. This RFC aims to rather follow the existing ZIP-215 specification for compatibility and maintainability than to create a new standard.

Using the cofactorless validation poses a similar breaking change since signatures accepted by implementations using the cofactored validation would then be rejected. More importantly, however, in order to be able to use the much faster batch verification, the cofactored version is required.

Copyright

Copyright and related rights waived via CC0.

tip: 15
title: Dust Protection
description: Prevent bloating the ledger size with dust outputs
author: Gal Rogozinski (@GalRogozinski) 
discussions-to: https://github.com/iotaledger/tips/pull/32
status: Replaced
type: Standards
layer: Core
created: 2020-12-07
superseded-by: TIP-19

Summary

In the UTXO model, each node in the network needs to keep track of all the currently unspent outputs. When the number of outputs gets too large, this can cause performance and memory issues. This RFC proposes a new protocol rule regarding the processing of outputs that transfer a very small amount of IOTA, so-called dust outputs: Dust outputs are only allowed when they are backed up by a certain deposit on the receiving address. This limits the amount of dust outputs, thus making it expensive to proliferate dust. Since a receiver must make a deposit, the protocol makes receiving dust an opt-in feature.

Motivation

An attacker, or even honest users, can proliferate the UTXO ledger with outputs holding a tiny amount of IOTA coins. This can cause the ledger to grow to a prohibitively large size.

In order to protect nodes from such attacks, one possible solution is to make accumulating dust outputs expensive. Since IOTA does not have any fees that might limit the feasibility of issuing many dust transactions, deposits pose a valid alternative to achieve a similar effect.

When an address is supposed to receive micro transactions, it must have an unspent output of a special type as a deposit. This deposit cannot be consumed by any transaction as long as the dust outputs remain unspent.

An additional benefit of this rule is that it makes a mass of privacy violating forced address reuse attacks more expensive to carry out.

Detailed design

Definitions

Dust output: A transaction output that has an amount smaller than 1 Mi

SigLockedDustAllowanceOutput: A new output type for deposits that enables an address to receive dust outputs. It can be consumed as an input like a regular SigLockedSingleOutput.

Name         Type           Description
Output Type  uint8          Set to value 1 to denote a SigLockedDustAllowanceOutput.
Address      oneOf
  Ed25519 Address
    Address Type  uint8          Set to value 0 to denote an Ed25519 Address.
    Address       ByteArray[32]  The raw bytes of the Ed25519 address which is the BLAKE2b-256 hash of the public key.
Amount       uint64         The amount of tokens to deposit with this SigLockedDustAllowanceOutput output.

Validation

Let A be the address that should hold the dust outputs' balances. Let S be the sum of all the amounts of all unspent SigLockedDustAllowanceOutputs on A. Then, the maximum number of allowed dust outputs on A is S divided by 100,000 and rounded down, i.e. 10 outputs for each 1 Mi deposited. However, regardless of S, the number of dust outputs must never exceed 100 per address.

The amount of a SigLockedDustAllowanceOutput must be at least 1 Mi. Apart from this, SigLockedDustAllowanceOutputs are processed identically to SigLockedSingleOutputs. The transaction validation as defined in the IOTA protocol TIP-7, however, needs to be adapted.

Syntactical validation for SigLockedDustAllowanceOutput:

  • The Address must be unique in the set of SigLockedDustAllowanceOutputs in one transaction T. However, there can be one SigLockedSingleOutput and one SigLockedDustAllowanceOutput with the same address in T.
  • The Amount must be ≥ 1,000,000.

The semantic validation remains unchanged and is checked for both SigLockedSingleOutputs and SigLockedDustAllowanceOutputs, but this RFC introduces one additional criterion:

A transaction T

  • consuming a SigLockedDustAllowanceOutput on address A or
  • creating a dust output with address A,

is only semantically valid, if, after T is booked, the number of confirmed unspent dust outputs on A does not exceed the allowed threshold of min(S / 100000, 100).
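The threshold rule above can be sketched in a few lines (a non-normative sketch; the function names are illustrative and not part of the specification):

```python
def max_dust_outputs(dust_allowance_sum: int) -> int:
    """Allowed number of dust outputs on an address.

    `dust_allowance_sum` is S, the summed amount of all unspent
    SigLockedDustAllowanceOutputs on the address, in IOTA tokens.
    The threshold is min(S / 100000, 100), rounded down.
    """
    return min(dust_allowance_sum // 100_000, 100)


def dust_rule_satisfied(dust_allowance_sum: int, unspent_dust_outputs: int) -> bool:
    """Semantic check: after booking a transaction, the number of
    confirmed unspent dust outputs must not exceed the threshold."""
    return unspent_dust_outputs <= max_dust_outputs(dust_allowance_sum)
```

For example, a 1 Mi deposit allows up to 10 dust outputs, while any deposit of 10 Mi or more caps out at the global limit of 100.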

Drawbacks

  • There can no longer be addresses holding less than 1 Mi.
  • The actual validity of dust transactions can only be checked during semantic validation.
  • A service receiving micropayments may fail to receive them if it has not consolidated dust outputs or raised the deposit for the receiving address.
  • An attacker can send microtransactions to an address with a SigLockedDustAllowanceOutput in order to fill the allowed threshold and block honest senders of microtransactions. The owner of the address can mitigate this by simply consolidating the attacker's dust and collecting it for profit.

Rationale and alternatives

The rationale for creating a special SigLockedDustAllowanceOutput rather than rely on the default SigLockedSingleOutputs is to prevent attackers from polluting arbitrary addresses that happen to hold a large amount of funds with dust.

One may note that an attacker can deposit a dust allowance on a third-party address outside their control and pollute that address with dust. From a security perspective this is better than an attacker depositing a dust allowance on addresses under their own control, because the receiving party might later choose to consolidate the dust outputs and thereby relieve UTXO memory consumption. The receiving party is also unlikely to be displeased by obtaining more funds, small as they may be.

There are potential alternatives to introducing dust allowance deposits:

  • Burning dust: Allow dust outputs to exist only for a limited amount of time in the ledger. After this, they are removed completely and the associated funds are invalidated.
  • Sweeping dust into Merkle trees: Instead of burning dust outputs after some time, they are instead compressed into a Merkle tree and only the tree root is kept. In order to spend one of these compressed outputs, the corresponding Merkle audit path needs to be supplied in addition to a regular signature.

The first option can cause issues when dust outputs are burned before users can consolidate them. Also, changing the supply can be controversial.

The second option is much more complicated, as it introduces a completely new unlock mechanism and requires nodes to store the Merkle tree roots indefinitely.

Copyright

Copyright and related rights waived via CC0.

tip: 16
title: Event API
description: Node event API definitions in AsyncAPI Specification
author: Luca Moser (@luca-moser) 
discussions-to: https://github.com/iotaledger/tips/pull/33
status: Replaced
type: Standards
layer: Interface
created: 2021-01-06
superseded-by: TIP-28

Summary

This document proposes the Event API for nodes supporting the IOTA protocol.

API

The API is described using the AsyncAPI Specification:

AsyncAPI Editor

Copyright

Copyright and related rights waived via CC0.

tip: 17
title: Wotsicide
description: Define migration from legacy WOTS addresses to post-Chrysalis Phase 2 network
author: Luca Moser (@luca-moser) 
discussions-to: https://github.com/iotaledger/tips/pull/35
status: Obsolete
type: Standards
layer: Core
created: 2021-01-13

Summary

This RFC defines the migration process of funds residing on WOTS addresses in the current legacy network to the post-Chrysalis Phase 2 network.

Motivation

The IOTA protocol wants to move away from WOTS as it created a number of security, protocol and UX issues:

  • WOTS signatures are big and make up a disproportionate amount of the data of a transaction.
  • It is only safe to spend from an address once. Spending multiple times from the same address reveals random parts of the private key, making any subsequent transfers (other than the first) susceptible to thefts.
  • As a prevention mechanism to stop users from spending multiple times from the same address, nodes have to keep an ever growing list of those addresses.

In the beginning of the new Chrysalis Phase 2 network, only Ed25519 addresses are supported. The protocol will no longer support WOTS addresses. Therefore, there needs to be a migration process from WOTS addresses to Ed25519 addresses in the new network.

To make the migration as smooth as possible, the specified mechanism allows for users to migrate their funds at any time with only a small delay until they're available on the new network.

This RFC outlines the detailed architecture of how users will be able to migrate their funds and specifies the underlying components and their purposes.

Detailed Design

On a high-level the migration process works as follows:

  • Users create migration bundles in the legacy network which target their Ed25519 address in the new network.
  • The Coordinator then mints those migrated funds in receipts which are placed within milestones on the new network.
  • Nodes in the new network evaluate receipts and book the corresponding funds by creating new UTXOs in the ledger.

Migration timeline

  1. Users issue migration bundles which effectively burn their funds. During this period, normal value bundles and zero-value transactions are allowed to become part of a milestone cone.
  2. The Coordinator is stopped, and a new global snapshot is created which contains the last issued milestone as its only solid entry point (published on dbfiles.iota.org). This global snapshot is used to create the genesis snapshot containing the already migrated funds for the new network. The remainder of the total supply which has not been migrated is allocated to the TreasuryOutput. Users are instructed to check the validity of these two snapshots.
  3. A new Hornet version is released which only allows migration bundles to be broadcasted or be part of milestone cones. Users must update their node software as otherwise they will no longer peer.
  4. The legacy network is restarted with the global snapshot, and the new network bootstraps with the genesis snapshot.
  5. Further funds migrated in the legacy network are transferred to the new network using the receipt mechanism.

Changes to the legacy network

In order to facilitate the migration process, the node software making up the legacy network needs to be updated. This update will be deployed by stopping the Coordinator and forcing all nodes to upgrade to this new version.

Migration bundle

The node software will no longer book ledger mutations to non-migration addresses. This means that users are incentivized to migrate their funds as they want to use their tokens. See this document on what migration addresses are.

A migration bundle is defined as follows:

  • It contains exactly one output transaction of which the destination address is a valid migration address and is positioned as the tail transaction within the bundle. The output transaction value is at least 1'000'000 tokens.
  • It does not contain any zero-value transactions other than those holding signature fragments. This means that every transaction other than the tail transaction must be part of an input.
  • Input transactions must not use migration addresses.

The node will only use tail transactions of migration or milestone bundles for the tip-pool. This means that past cones referenced by a milestone will only include such bundles.

The legacy node software is updated with an additional HTTP API command called getWhiteFlagConfirmation which, given request data in the following form:

{
    "command": "getWhiteFlagConfirmation",
    "milestoneIndex": 1434593
}

returns data for the given milestone white-flag confirmation:

{
    "milestoneBundle": [
        "SDGKWKJAG...",
        "WNGHJWIFA...",
        "DSIEWSDIG..."
    ],
    "includedBundles": [
        [
            "SKRGI9DFS...",
            "NBJSKRJGW...",
            "ITRUQORTZ..."
        ],
        [
            "OTIDFJKSD...",
            "BNSUGRWER...",
            "OPRGJSDFJ..."
        ],
        ...
    ]
}

where milestoneBundle contains the milestone bundle trytes and includedBundles is an array of tryte arrays of included bundles in the same DFS order as the white-flag confirmation. Trytes within a bundle "array" are sorted from currentIndex = 0 ascending to the lastIndex.

This HTTP API command allows interested parties to verify which migration bundles were confirmed by a given milestone.
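As a non-normative sketch, a client query for this command might look as follows, assuming the legacy command-style HTTP API in which a JSON body is POSTed to the node endpoint (the header values and `node_url` are assumptions, not part of this specification):

```python
import json
import urllib.request


def build_request(milestone_index: int) -> bytes:
    """Serialize the getWhiteFlagConfirmation command body."""
    return json.dumps({
        "command": "getWhiteFlagConfirmation",
        "milestoneIndex": milestone_index,
    }).encode()


def get_white_flag_confirmation(node_url: str, milestone_index: int) -> dict:
    """POST the command to a legacy node and decode the JSON response."""
    req = urllib.request.Request(
        node_url,
        data=build_request(milestone_index),
        headers={
            "Content-Type": "application/json",
            "X-IOTA-API-Version": "1",  # assumed legacy API version header
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The returned dictionary would then contain the `milestoneBundle` and `includedBundles` fields shown above.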

Milestone inclusion Merkle proof

The Coordinator will only include migration bundles (respectively the tails of those bundles) in its inclusion Merkle proof. Nodes which do not run with the updated code will crash once the updated confirmation is in place.

Preventing non-migration bundles

As an additional measure to prevent users from submitting never confirming non-migration bundles (which would lead to key-reuse), nodes will no longer accept non-migration bundles in the HTTP API.

HTTP API level checks:

  • The user must submit an entire migration bundle. No more single zero-value transactions, value-spam bundles etc. are allowed.
  • Input transactions must spend the entirety of the funds residing on the corresponding address. There must be more than 0 tokens on the given address.

Wallet software must be updated to no longer support non-migration bundles.

There are no restrictions put in place on the gossip level, as it is too complex to filter out non-migration transactions there; however, these transactions will never become part of a milestone cone.

Treasury Transaction

A TreasuryTransaction is a payload which contains a reference to the current TreasuryOutput (in form of a TreasuryInput object) and an output TreasuryOutput which deposits the remainder.

Serialized form:

Name    Type            Description
Type    uint32          Set to value 4 to denote a TreasuryTransaction.
Input   TreasuryInput
  Input Type      byte             Set to value 1 to denote a TreasuryInput.
  Milestone Hash  Array<byte>[32]  The hash of the milestone which created the referenced TreasuryOutput.
Output  TreasuryOutput
  Output Type  byte    Set to value 2 to denote a TreasuryOutput.
  Amount       uint64  The amount of funds residing in the treasury.

Treasury Input

The TreasuryInput is equivalent to a normal UTXOInput but instead of referencing a transaction, it references a milestone. This input can only be used within TreasuryTransaction payloads.

Serialized form:

Name            Type             Description
Input Type      byte             Set to value 1 to denote a TreasuryInput.
Milestone Hash  Array<byte>[32]  The hash of the milestone which created the referenced TreasuryOutput.

Treasury Output

The TreasuryOutput is a special output type which represents the treasury of the network, i.e. the not yet migrated funds. At any given moment in time, there is only one TreasuryOutput.

Serialized form:

Name         Type    Description
Output Type  byte    Set to value 2 to denote a TreasuryOutput.
Amount       uint64  The amount of funds residing in the treasury.

The TreasuryOutput cannot be referenced or spent by transactions; it can only be referenced by receipts.

The TreasuryOutput can be queried from the HTTP API and needs to be included within snapshots in order to keep the total supply intact.

Receipts

Receipts allow for fast migration of funds from the legacy into the new network by representing entries of funds which were migrated in the old network.

Schema

Receipts are listings of funds for which nodes must generate UTXOs in the form of SigLockedSingleOutputs targeting the given address. Receipts are embedded within milestone payloads and therefore signed by the Coordinator. A milestone may contain up to one receipt as a payload. The Coordinator chooses whether to embed a receipt payload or not.

Serialized form:

Name            Type    Description
Payload Type    uint32  Set to value 3 to denote a Receipt.
Migrated At     uint32  The index of the legacy milestone in which the listed funds were migrated.
Final           bool    Flags whether this receipt is the last receipt for the given Migrated At index.
Funds Count     uint16  Denotes how many migrated fund entries are within the receipt.
Funds
  Migrated Funds Entry
    Tail Transaction Hash  Array<byte>[49]  The tail transaction hash of the bundle in which these funds were migrated.
    Address                oneOf
      Ed25519 Address
        Address Type  uint8            Set to value 0 to denote an Ed25519 Address.
        Address       Array<byte>[32]  The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
    Amount                 uint64           The amount which was migrated.
Payload Length  uint32  The length in bytes of the payload.
Payload
  TreasuryTransaction Payload

Validation

Syntactical

  • funds_array_count can be max 127 and must be > 0.
  • funds array must be in lexical sort order (by their serialized form).
  • any tail_transaction_hash must be unique over the entire receipt.
  • deposit must be ≥ 1'000'000 IOTA tokens.
  • payload_length can not be zero.
  • treasury_transaction must be a Treasury Transaction payload.
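The syntactic rules above can be collected into a single predicate (an illustrative, non-normative representation; the dictionary keys and function name are assumptions):

```python
def receipt_syntactically_valid(funds: list[dict], payload_length: int) -> bool:
    """Check the syntactic receipt rules.

    `funds` is a list of migrated fund entries, each with
    'tail_transaction_hash', 'serialized' (the entry's serialized bytes)
    and 'amount' keys -- an illustrative in-memory representation.
    """
    tails = [f["tail_transaction_hash"] for f in funds]
    serialized = [f["serialized"] for f in funds]
    return (
        0 < len(funds) <= 127                           # funds_array_count bounds
        and serialized == sorted(serialized)            # lexical sort order
        and len(tails) == len(set(tails))               # unique tail tx hashes
        and all(f["amount"] >= 1_000_000 for f in funds)  # deposit >= 1 Mi
        and payload_length > 0                          # payload_length non-zero
    )
```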

Semantic

  • migrated_at can not decrease between subsequent receipts.
  • There must not be any subsequent receipts with the same migrated_at index after the one with the final flag set to true.
  • The amount field of the previous TreasuryOutput minus the sum of all the newly migrated funds must equal the amount of the new TreasuryOutput within the TreasuryTransaction payload.
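The treasury balance criterion can be expressed directly (an illustrative sketch, not normative):

```python
def treasury_balance_valid(prev_treasury: int,
                           migrated_amounts: list[int],
                           new_treasury: int) -> bool:
    """The previous TreasuryOutput amount minus the sum of all newly
    migrated funds must equal the new TreasuryOutput amount."""
    return prev_treasury - sum(migrated_amounts) == new_treasury
```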

Legitimacy of migrated funds

While the syntactical and semantic validation ensures that the receipt's integrity is correct, it doesn't actually tell whether the given funds were really migrated in the legacy network.

In order to validate this criterion, the node software performs the following operations:

  1. The HTTP API of a legacy node is queried for all the tail_transaction_hashes, the addresses and their corresponding migrated funds.
  2. The node checks whether the funds within the receipt match the response from the legacy node.
  3. Additionally, if the receipt's final flag was set to true, it is validated whether all funds for the given legacy milestone were migrated by looking at all the receipts with the same migrated_at index.

If the operation fails, the node software must gracefully terminate with an appropriate error message.

In an optimal setting, node operators choose to only ask their own deployed nodes in the legacy network.

Booking receipts

After successful receipt validation, the node software generates UTXOs in the following form: a SigLockedSingleOutput is allocated with the given ed25519_address and the migrated funds as the deposit. As there is no actual transaction generating the UTXO, and a UTXO ID normally consists of transaction hash | output index, the hash of the milestone which included the receipt with the given funds is used in place of the transaction hash. The output index equals the index of the funds entry within the receipt (this is also why a receipt is limited to 127 entries). This makes it easy to look up the milestone in which these funds were generated.
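A sketch of this UTXO ID derivation, assuming the 32-byte milestone hash and the usual uint16 little-endian output index encoding (the function name is illustrative):

```python
def migrated_fund_utxo_id(milestone_hash: bytes, fund_index: int) -> bytes:
    """UTXO ID for a migrated fund: the including milestone's hash takes
    the place of the transaction hash, and the index of the entry within
    the receipt serves as the output index (hence max 127 entries)."""
    assert len(milestone_hash) == 32
    assert 0 <= fund_index < 128
    # output index serialized as uint16 little-endian, as for regular UTXOs
    return milestone_hash + fund_index.to_bytes(2, "little")
```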

If one wants to audit the UTXO path of an input, milestones need to be kept forever, as they are needed to recognize that a certain output was generated by a given milestone. However, this can be offloaded to a 2nd level service.

All the generated SigLockedSingleOutputs from the migrated funds are then booked into the ledger and the new TreasuryOutput is persisted as a UTXO using the milestone hash of the receipt which included the Treasury Transaction payload.

Transparency

For transparency reasons, the IF offers software which renders a dashboard showing details throughout the entire migration process:

  • A list of outstanding funds residing on migration addresses to be migrated with the milestone index at which they were created.
  • Migrated funds.
  • Generated receipts.

Misc

At the current legacy network ledger size of 261446 entries (addresses with ≥ 1'000'000 tokens), it would take a minimum of ~2059 receipts to migrate all funds. While theoretically the max message size allows for more entries within one receipt, it is limited by the fact that the index of the migrated address within the receipt is used to generate the output_index of the generated SigLockedSingleOutput (as explained above).

Assuming the best case scenario in which all 261446 entries were sent to migration addresses in the legacy network, these funds could therefore be migrated into the new network within ~5.7h (at a 10 second milestone interval). Of course, in practice users will migrate over time and the receipt mechanism will need to be in place as long as the new network runs.

Looking at receipt validation from a higher level, it becomes apparent that it is analogous to previous global snapshots, where users would post comments on a GitHub PR stating that they computed the same ledger state; it is just more granular and automatic, while still leveraging the same source of truth: the ledger state/database of nodes.

Caveats

  • The local snapshot file format needs to also include the to-be-applied receipts and supply information.

Copyright

Copyright and related rights waived via CC0.

tip: 18
title: Multi-Asset Ledger and ISC Support
description: Transform IOTA into a multi-asset ledger that supports running IOTA Smart Contracts
author: Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/pull/38
status: Active
type: Standards
layer: Core
created: 2021-11-04
requires: TIP-19, TIP-20, TIP-21 and TIP-22

Summary

This document proposes new output types and transaction validation rules for the IOTA protocol to support native tokenization and smart contract features.

Native tokenization refers to the capability of the IOTA ledger to track the ownership and transfer of user defined tokens, so-called native tokens, thus making it a multi-asset ledger. The scalable and feeless nature of IOTA makes it a prime candidate for tokenization use cases.

The IOTA Smart Contract Protocol (ISCP) is a layer 2 extension of the IOTA protocol that adds smart contract features to the Tangle. Many so-called smart contract chains, which anchor their state to the base ledger, can be run in parallel. Users wishing to interact with smart contract chains can send requests to layer 1 chain accounts either as regular transactions or directly to the chain, but chains may also interact with other chains in a trustless manner through the Tangle.

This TIP presents output types that realize the required new features:

  • Smart contract chains have a new account type, called alias account, represented by an alias output.
  • Requests to smart contract chains can be carried out using the configurable new output type called basic output.
  • Native tokens have their own supply control policy enforced by foundry outputs.
  • Layer 1 native non-fungible tokens (unique tokens with attached metadata) are introduced via NFT outputs.

Motivation

IOTA transitioned from an account based ledger model to an unspent transaction output (UTXO) model with the upgrade to Chrysalis phase 2. In this model, transactions explicitly reference funds produced by previous transactions to be consumed. This property is desired for scalability: transaction validation does not depend on the shared global state and, as such, transactions can be validated in parallel. Double-spends can easily be detected as they spend the very same output more than once.

The UTXO model becomes even more powerful when unlocking criteria (validation) of outputs is extended as demonstrated by the EUTXO model (Chakravarty et al., 2020): instead of requiring only a valid signature for the output's address to unlock it, additional unlocking conditions can be programmed into outputs. This programmability of outputs is the main idea behind the new output types presented in this document.

Today, outputs in the IOTA protocol are designed for one specific use case: the single asset cryptocurrency. The aim of this TIP is to design several output types for the use cases of:

  • Native Tokenization Framework,
  • ISCP style smart contracts,
  • seamless interoperability between layer 1 and layer 2 tokenization concepts.

Users will be able to mint their own native tokens directly in the base ledger, which can then be transferred without any fees just like regular IOTA coins. Each native token has its own supply control policy enforced by the protocol. These policies are transparent to all network participants. Issuers will be able to store metadata about their tokens on-ledger, accessible to anyone.

Non-fungible tokens can be minted and transferred with zero fees. The validated issuers of such NFTs are immutably attached to the tokens, making it impossible to counterfeit them.

Users will be able to interact with smart contracts by posting requests through the Tangle. Requests can carry commands to smart contracts and can additionally also transfer native tokens and NFTs. By depositing native tokens to smart contracts, their features can be greatly enhanced and programmed to specific use cases.

The proposal in this TIP not only makes it possible to transfer native tokens to layer 2 smart contracts, but tokens that originate from layer 2 smart contract chains can also be wrapped into their respective layer 1 representation. Smart contract chains may transfer tokens between themselves through this mechanism, and they can also post requests to other chains.

Composability of smart contracts extends the realm of one smart contract chain, as smart contracts residing on different chains can call each other in a trustless manner.

In conclusion, the IOTA protocol will become a scalable general purpose multi-asset DLT with the addition of smart contracts and native tokenization frameworks. The transition is motivated by the ever-growing need for a scalable and affordable decentralized application platform.

Detailed Design

Outputs in the UTXO model are essential, core parts of the protocol. The new output types introduce new validation and unlocking mechanisms, therefore the protocol needs to be adapted. The structure of the remaining sections is as follows:

  1. Introduction to ledger programmability
  2. Data types, subschemas and protocol constants
  3. Transaction Payload changes compared to Chrysalis Part 2
  4. New concepts of output design
  5. Detailed design of new output types
  6. New unlocking mechanisms
  7. Discussion

Ledger Programmability

The current UTXO model only provides support to transfer IOTA coins. However, the UTXO model presents a unique opportunity to extend the range of possible applications by programming outputs.

Programming the base ledger of a DLT is not a new concept. Bitcoin uses the UTXO model and attaches small executables (scripts) that need to be executed during transaction validation. The bitcoin script language is however not Turing-complete as it can only support a small set of instructions that are executed in a stack based environment. As each validator has to execute the same scripts and arrive at the same conclusion, such scripts must terminate very quickly. Also, as transaction validation happens in the context of the transaction and block, the scripts have no access to the global shared state of the system (all unspent transaction outputs).

The novelty of Ethereum was to achieve quasi Turing-completeness by employing an account based model and gas to limit resource usage during program execution. As the amount of gas used per block is limited, only quasi Turing-completeness can be achieved. The account based model of Ethereum makes it possible for transactions to have access to the global shared state of the system, furthermore, transactions are executed one-after-the-other. These two properties make Ethereum less scalable and susceptible to high transaction fees.

Cardano achieves UTXO programmability by using the EUTXO model. This makes it possible to represent smart contracts in a UTXO model as state machines. In EUTXO, states of the machine are encoded in outputs, while state transition rules are governed by scripts. Just like in bitcoin, these scripts may only use a limited set of instructions.

It would be quite straightforward to support EUTXO in IOTA too, except that IOTA transactions are feeless. There is no reward to be paid out to validators for validating transactions, as all nodes in the network validate all transactions. Due to the unique data structure of the Tangle, there is no need for miners to explicitly choose which transactions are included in the ledger, but there still has to be a notion of objective validity of transactions. Since it is not possible without fees to penalize scripts that consume excessive network resources (node CPU cycles) during transaction validation, IOTA has to be overly restrictive about what instructions are supported on layer 1.

It must also be noted that UTXO scripts are finite state machines with the state space restricted by the output and transaction validation rules. It makes expressiveness of UTXO scripts inherently limited. In the context of complicated application logic required by use cases such as modern DeFi, this leads to unconventional and complicated architectures of the application, consisting of many interacting finite state machines. Apart from complexity and UX costs, it also has performance and scalability penalties.

For the reason mentioned above, IOTA chooses to support configurable yet hard-coded scripts for output and transaction validation on layer 1. The general full-scale quasi Turing-complete programmability of the IOTA ledger is achieved by extending the ledger state transition function with layer 2 smart contract chains. This not only makes it possible to keep layer 1 scalable and feeless, but also allows to support any type of virtual machine on layer 2 to program advanced business logic and features.

Below, several new output types are discussed that implement their own configurable script logic. They can be viewed as UTXO state machines in which the state of the machine is encoded as data inside the output. The state transition rules are defined by the output type and by the parameters chosen upon deployment.

Data Types & Subschema Notation

Data types and subschemas used throughout this TIP are defined in TIP-21.

Global Protocol Parameters

Global protocol parameters used throughout this TIP are defined in TIP-22 (IOTA) and TIP-32 (Shimmer).

Transaction Payload Changes

The new output types and unlocking mechanisms require new transaction validation rules; furthermore, some protocol rules have been modified compared to the Chrysalis Part 2 Transaction Payload TIP-7.

TIP-20 replaces aforementioned TIP-7 with the new transaction layout and validation rules. The updated version is the basis for output validation in this TIP.

Summary of Changes

  • Deprecating SigLockedSingleOutput and SigLockedDustAllowanceOutput.
    • The new dust protection mechanism does not need a distinct output type, therefore SigLockedDustAllowanceOutput will be deprecated. One alternative is that during migration to the new protocol version, all dust outputs sitting on an address will be merged into a Basic Output together with their respective SigLockedDustAllowanceOutputs to create the snapshot for the updated protocol. The exact migration strategy will be decided later.
  • Adding new output types to Transaction Payload.
  • Adding new unlock types to Transaction Payload.
  • Inputs and Outputs of a transaction become a list instead of a set. Binary duplicate inputs are not allowed as they anyway mean double-spends, but binary duplicate outputs are allowed.
  • There can be many outputs created to the same address in the transaction.
  • Confirming milestone supplies notion of time to semantic transaction validation.

New Concepts

New output types add new features to the protocol and hence new transaction validation rules. While some of these new features are specifically tied to one output type, some are general, LEGO like building blocks that may be put in several types of outputs.

Below is a summary of such new features and the validation rules they introduce.

Native Tokens in Outputs

Outputs are records in the UTXO ledger that track ownership of funds. Thus, each output must be able to specify which funds it holds. With the addition of the Native Tokenization Framework, outputs may also carry user defined native tokens, that is, tokens that are not IOTA coins but were minted by foundries and are tracked in the very same ledger. Therefore, every output must be able to hold not only IOTA coins, but also native tokens.

Dust protection applies to all outputs, therefore it is not possible for outputs to hold only native tokens, the storage deposit requirements must be covered via IOTA coins.

User defined tokens are called Native Tokens on the protocol level. The maximum supply of a particular native token is defined by the representation chosen on the protocol level for defining their amounts in outputs. Since native tokens are also a vehicle to wrap layer 2 tokens into layer 1 tokens, the chosen representation must take into account the maximum possible supply of layer 2 tokens. Solidity, the most popular smart contract language, defines the maximum supply of an ERC-20 token as MaxUint256; therefore, it should be possible to represent such a huge amount of assets on layer 1.

Outputs must have the following fields to define the balance of native tokens they hold:

Name                 Type      Description
Native Tokens Count  uint8     The number of native tokens present in the output.
Native Tokens        optAnyOf
  Native Token
    Token ID  ByteArray[38]  Identifier of the native token. Derivation defined here.
    Amount    uint256        Amount of tokens.

Additional syntactic output validation rules:

  • Native Tokens must be lexicographically sorted based on Token ID.
  • Each Native Token must be unique in the set of Native Tokens based on its Token ID. No duplicates are allowed.
  • Amount of any Native Token must not be 0.
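The three syntactic rules above can be sketched as follows. This is a minimal illustration, not the reference implementation, assuming each native token is given as a `(token_id, amount)` pair:

```python
def validate_native_tokens(tokens: list[tuple[bytes, int]]) -> bool:
    """Syntactic checks on an output's Native Tokens set."""
    ids = [tid for tid, _ in tokens]
    return (
        ids == sorted(ids)                              # lexicographic order by Token ID
        and len(ids) == len(set(ids))                   # Token IDs must be unique
        and all(0 < amt < 2**256 for _, amt in tokens)  # non-zero, fits a uint256
    )
```

Sorting by Token ID makes the serialized form canonical, so two outputs holding the same token balances always serialize to the same bytes.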

Additional semantic transaction validation rules:

  • The transaction is balanced in terms of native tokens, that is, the sum of native token balances in consumed outputs equals that of the created outputs.
  • When the transaction is imbalanced and there is a surplus of native tokens on the:
    • output side of the transaction: the foundry outputs controlling outstanding native token balances must be present in the transaction. The validation of the foundry output(s) determines if the minting operations are valid.
    • input side of the transaction: the transaction destroys tokens. The presence and validation of the foundry outputs of the native tokens determines whether the tokens are burned (removed from the ledger) or melted in the foundry.
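The balance rule above amounts to computing a per-token net delta across the transaction. A sketch (not normative), with each side given as `(token_id, amount)` pairs:

```python
from collections import defaultdict

def token_surplus(inputs, outputs):
    """Net native-token balance of a transaction: output minus input side.

    An empty result means the transaction is balanced. A positive delta is a
    minting operation (the controlling foundry must be present on the output
    side); a negative delta means tokens are melted or burned.
    """
    net = defaultdict(int)
    for tid, amt in inputs:
        net[tid] -= amt
    for tid, amt in outputs:
        net[tid] += amt
    return {tid: delta for tid, delta in net.items() if delta != 0}
```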

New Functionalities in Outputs

The programmability of outputs opens the door for implementing new functionalities for the base protocol. While some outputs were specifically designed for such new features, some are optional additions that may be used with any outputs that support them.

These new functionalities are grouped into two categories:

  • Unlock Conditions and
  • simple Features.

The Output Design section lists all supported Unlock Conditions and Features for each output type.

Unlock Conditions

New output features that introduce unlocking conditions, that is, they define constraints on how the output can be unlocked and spent, are grouped under the field Unlock Conditions.

Each output must not contain more than one unlock condition of each type, and not all unlock condition types are supported for each output type.

Address Unlock Condition

It is merely a layout change that the previously defined Address field of outputs (TIP-7) is now represented as an Address Unlock Condition. Unlocking an Ed25519 Address doesn't change: it has to be performed via a Signature Unlock in a transaction by signing the hash of the transaction essence. Transaction validation rules are detailed in TIP-20.

New additions are the Alias Address and NFT Address types, which have to be unlocked with their corresponding unlocks, as defined in Unlocking Chain Script Locked Outputs.

Address Unlock
Defines the Address that owns this output, that is, it can unlock it with the proper Unlock in a transaction.
Name Type Description
Unlock Condition Type uint8 Set to value 0 to denote an Address Unlock Condition.
Address
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
:information_source: Good to know about address format

The Address Type byte of a raw address has an effect on the starting character of the bech32 encoded address, which is the recommended address format for user facing applications.

A usual bech32 encoded mainnet address starts with iota1, and continues with the bech32 encoded bytes of the address. By choosing Address Type as a multiple of 8 for different address types, the first character after the 1 separator in the bech32 address will always be different.

| Address | Type Byte as uint8 | Bech32 Encoded |
| ------- | ------------------ | -------------- |
| Ed25519 | 0 | iota1q... |
| Alias | 8 | iota1p... |
| NFT | 16 | iota1z... |

A user can identify by looking at the address whether it is a signature backed address, a smart contract chain account or an NFT address.
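Why multiples of 8 produce distinct starting characters can be seen directly from how bech32 works: the payload is repacked into 5-bit groups, so the first encoded character after the `1` separator is simply the top 5 bits of the Address Type byte. A small sketch using the standard bech32 alphabet:

```python
# The standard bech32 character set (BIP-173).
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def first_data_char(address_type: int) -> str:
    """First bech32 character after the '1' separator for a raw address
    beginning with the given Address Type byte."""
    return CHARSET[address_type >> 3]  # top 5 bits of the first payload byte

for name, type_byte in [("Ed25519", 0), ("Alias", 8), ("NFT", 16)]:
    print(f"{name}: iota1{first_data_char(type_byte)}...")
```

Since 0, 8 and 16 differ in their top 5 bits, the mapped characters (`q`, `p`, `z`) are guaranteed to differ.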

Storage Deposit Return Unlock Condition

This unlock condition is employed to achieve conditional sending. An output that has a Storage Deposit Return Unlock Condition specified can only be consumed in a transaction that deposits Return Amount IOTA coins into Return Address. When several such outputs are consumed, their return amounts are summed up per Return Address, and the output side must deposit this total sum per Return Address.

Additional syntactic transaction validation rule:
  • Minimum Storage Deposit is the storage deposit in the base currency required for a Basic Output that only has an Address Unlock Condition, no additional unlock conditions, no features and no native tokens.
  • It must hold true that Minimum Storage Deposit ≤ Return Amount ≤ Amount.
Additional semantic transaction validation rule:
  • An output that has a Storage Deposit Return Unlock Condition specified must only be consumed and unlocked in a transaction that deposits Return Amount IOTA coins to Return Address.
  • When several outputs with a Storage Deposit Return Unlock Condition and the same Return Address are consumed, their return amounts are summed up per Return Address, and the output side of the transaction must deposit at least this total sum to each Return Address.
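The per-address aggregation rule can be sketched as below. This is a hypothetical helper, assuming addresses are hashable keys and that only outputs eligible as deposit returns are passed in on the created side:

```python
from collections import defaultdict

def deposits_satisfied(consumed_sdruc, created_return_outputs) -> bool:
    """Check that the output side repays the aggregated Return Amounts.

    consumed_sdruc: (return_address, return_amount) for each consumed output
        carrying a Storage Deposit Return Unlock Condition.
    created_return_outputs: (address, amount) for created outputs eligible
        as deposit returns.
    """
    required = defaultdict(int)
    for addr, amount in consumed_sdruc:
        required[addr] += amount          # sum return amounts per address
    repaid = defaultdict(int)
    for addr, amount in created_return_outputs:
        repaid[addr] += amount
    return all(repaid[addr] >= amt for addr, amt in required.items())
```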
Storage Deposit Return Unlock Condition
Defines the amount of IOTAs used as storage deposit that have to be returned to Return Address.
Name Type Description
Unlock Condition Type uint8 Set to value 1 to denote a Storage Deposit Return Unlock Condition.
Return AddressoneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Return Amount uint64 Amount of IOTA coins the consuming transaction should deposit to Return Address.

This unlock condition makes it possible to send small amounts of IOTA coins or native tokens to addresses without losing control of the required storage deposit. It is also a vehicle to send on-chain requests to ISCP chains that do not require fees. To prevent the receiving party from blocking access to the storage deposit, it is advisable to use it together with the Expiration Unlock Condition. The receiving party then has a sender-defined time window to agree to the transfer by consuming the output, or the sender regains total control after expiration.

Timelock Unlock Condition

The notion of time in the Tangle is introduced via milestones. Each milestone carries the current unix timestamp corresponding to that milestone index. Whenever a new milestone appears, nodes perform the white-flag ordering and transaction validation on its past cone. The timestamp of the confirming milestone provides the time as an input parameter to transaction validation.

An output that contains a Timelock Unlock Condition cannot be unlocked before the specified timelock has expired. The timelock is expired when the timestamp of the confirming milestone is equal to or later than the timestamp defined in the Timelock Unlock Condition.

Additional syntactic transaction validation rules:
  • Unix Time field of a Timelock Unlock Condition must be > 0.
Additional semantic transaction validation rules:
  • An output that has a Timelock Unlock Condition specified must only be consumed and unlocked in a transaction if the timestamp of the confirming milestone is equal to or later than the Unix Time specified in the unlock condition.
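The semantic check reduces to a single comparison against the confirming milestone's timestamp, sketched here for illustration:

```python
def timelock_expired(milestone_timestamp: int, unix_time: int) -> bool:
    """A timelocked output becomes spendable once the confirming milestone's
    unix timestamp reaches the Unix Time of the Timelock Unlock Condition."""
    return milestone_timestamp >= unix_time
```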
Timelock Unlock Condition
Defines a unix timestamp until which the output cannot be unlocked.
Name Type Description
Unlock Condition Type uint8 Set to value 2 to denote a Timelock Unlock Condition.
Unix Time uint32 Unix time (seconds since Unix epoch) starting from which the output can be consumed.
Expiration Unlock Condition

The expiration feature of outputs makes it possible for the return address to reclaim an output after a given expiration time has passed. The expiration can be specified as a unix timestamp.

The expiration feature can be viewed as an opt-in receive feature, because the recipient loses access to the received funds after the output expires, while the return address specified by the sender regains control over them. This feature is a big help for on-chain smart contract requests: those that have an expiration set and are sent to dormant smart contract chains can be recovered by their senders. It also makes it possible to time requests by specifying both a timelock and an expiration unlock condition.

Additional syntactic transaction validation rules:
  • Unix Time field of an Expiration Unlock Condition must be > 0.
Additional semantic transaction validation rules:
  • An output that has an Expiration Unlock Condition set must only be consumed and unlocked by the target Address (defined in the Address Unlock Condition) in a transaction whose confirming milestone timestamp is earlier than the Unix Time defined in the unlock condition.
  • An output that has an Expiration Unlock Condition set must only be consumed and unlocked by the Return Address in a transaction whose confirming milestone timestamp is equal to or later than the Unix Time defined in the unlock condition.
  • Semantic validation of an output that has an Expiration Unlock Condition set and is unlocked by the Return Address must ignore the semantic validation of a Storage Deposit Return Unlock Condition, if present.

The following table summarizes the outcome of syntactic and semantic validation rules with respect to which account is allowed to unlock the output containing the Expiration Unlock Condition:

| Milestone Unix Timestamp Condition | Outcome |
| ---------------------------------- | ------- |
| Unix Time = 0 | Output and containing transaction is invalid. |
| Unix Time > Confirming Milestone Unix Timestamp | Unlockable by Address |
| Unix Time ≤ Confirming Milestone Unix Timestamp | Unlockable by Return Address |
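The outcomes summarized above can be sketched as a small decision function for illustration:

```python
def expiration_unlocker(unix_time: int, milestone_timestamp: int) -> str:
    """Which party may unlock an output carrying an Expiration Unlock
    Condition, given the confirming milestone's unix timestamp."""
    if unix_time == 0:
        return "invalid"           # rejected by syntactic validation
    if milestone_timestamp < unix_time:
        return "Address"           # the recipient may still unlock
    return "Return Address"        # after expiration, the sender reclaims
```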
Expiration Unlock Condition
Defines a unix time until which only Address, defined in Address Unlock Condition, is allowed to unlock the output. After the unix time is reached/passed, only Return Address can unlock it.
Name Type Description
Unlock Condition Type uint8 Set to value 3 to denote an Expiration Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Unix Time uint32 Before this unix time, Address Unlock Condition is allowed to unlock the output, after that only the address defined in Return Address.
State Controller Address Unlock Condition

An unlock condition defined solely for Alias Output. It is functionally equivalent to an Address Unlock Condition, however there are additional transition constraints defined for the Alias UTXO state machine that can only be carried out by the State Controller Address, hence the distinct unlock condition type.

State Controller Address Unlock
Defines the State Controller Address that owns this output, that is, it can unlock it with the proper Unlock in a transaction that state transitions the alias output.
Name Type Description
Unlock Condition Type uint8 Set to value 4 to denote a State Controller Address Unlock Condition.
Address
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.

The additional constraints are defined in the Alias Output Design section.

Governor Address Unlock Condition

An unlock condition defined solely for Alias Output. It is functionally equivalent to an Address Unlock Condition, however there are additional transition constraints defined for the Alias UTXO state machine that can only be carried out by the Governor Address, hence the distinct unlock condition type.

Governor Address Unlock
Defines the Governor Address that owns this output, that is, it can unlock it with the proper Unlock in a transaction that governance transitions the alias output.
Name Type Description
Unlock Condition Type uint8 Set to value 5 to denote a Governor Address Unlock Condition.
Address
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.

The additional constraints are defined in the Alias Output Design section.

Immutable Alias Address Unlock Condition

An unlock condition defined for chain constrained UTXOs that can only be unlocked by a permanent Alias Address.

Output unlocking is functionally equivalent to an Address Unlock Condition with an Alias Address, however there are additional transition constraints: the next state of the UTXO machine must have the same Immutable Alias Address Unlock Condition.

Immutable Alias Address Unlock Condition
Defines the permanent Alias Address that owns this output.
Name Type Description
Unlock Condition Type uint8 Set to value 6 to denote an Immutable Alias Address Unlock Condition.
Address
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
Additional semantic transaction validation rules:
  • The output must be unlocked with an Alias Unlock.
  • The next state of the UTXO state machine must have the same Immutable Alias Address Unlock Condition defined.

Features

New output features that do not introduce unlocking conditions, but rather add new functionality and add constraints on output creation are grouped under Features.

Each output must not contain more than one feature of each type, and not all feature types are supported for each output type.

Sender Feature

Every transaction consumes several elements from the UTXO set and creates new outputs. However, certain applications (smart contracts) require each output to be associated with exactly one sender address. Here, the Sender Feature is used to specify the validated sender of an output.

Outputs that support the Sender Feature may specify a Sender address which is validated by the protocol during transaction validation.

Additional semantic transaction validation rule:
  • The Sender Feature, and hence the output and transaction that contain it, is valid, if and only if Sender address is unlocked in the transaction. Based on the Address Type, an address is unlocked in the transaction, if and only if:
    • Ed25519 Address:
      • The Unlock of the first output in the transaction that contains the address is a valid Signature Unlock with respect to the address.
    • Alias Address:
      • The Alias Output that defines the address is state transitioned in the transaction. A governance transition does not unlock the address.
    • NFT Address:
      • The NFT Output that defines the address is consumed as input in the transaction.
Sender Feature
Identifies the validated sender of the output.
Name Type Description
Feature Type uint8 Set to value 0 to denote a Sender Feature.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Issuer Feature

The issuer feature is a special case of the sender feature that is only supported by outputs that implement a UTXO state machine with chain constraint (alias, NFT). Only when the state machine is created (e.g. minted) is it checked during transaction validation that an output corresponding to the Issuer address is consumed. In every future transition of the state machine, it is instead checked that the issuer feature is still present and unchanged.

Additional semantic transaction validation rule:
  • When an Issuer Feature is present in an output representing the initial state of an UTXO state machine, the transaction that contains this output is valid, if and only if Issuer address is unlocked in the transaction. Based on the Address Type, an address is unlocked in the transaction, if and only if:
    • Ed25519 Address:
      • The Unlock of the first output in the transaction that contains the address is a valid Signature Unlock with respect to the address.
    • Alias Address:
      • The Alias Output that defines the address is state transitioned in the transaction. A governance transition does not unlock the address.
    • NFT Address:
      • The NFT Output that defines the address is consumed as input in the transaction.

The main use case is proving authenticity of NFTs. Whenever an NFT is minted as an NFT output, the creator (issuer) can fill the Issuer Feature with their address that they have to unlock in the transaction. Issuers then can publicly disclose their addresses to prove the authenticity of the NFT once it is in circulation.

Issuer Feature
Identifies the validated issuer of the UTXO state machine.
Name Type Description
Feature Type uint8 Set to value 1 to denote an Issuer Feature.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.

Whenever a chain account mints an NFT on layer 1 on behalf of some user, the Issuer field can only contain the chain's address, since the user does not sign the layer 1 transaction. As a consequence, artists would have to mint NFTs themselves on layer 1 and then deposit them to chains if they want to place their own address in the Issuer field.

Metadata Feature

Outputs may carry additional data with them that is interpreted by higher layer applications built on the Tangle. The protocol treats this metadata as pure binary data; it has no effect on the validity of an output except that it increases the required storage deposit. ISC is a great example of a higher layer protocol that makes use of the Metadata Feature: smart contract request parameters are encoded in the metadata field of outputs.

Additional syntactic transaction validation rules:
  • An output with Metadata Feature is valid, if and only if 0 < length(Data) ≤ Max Metadata Length.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.

Tag Feature

A Tag Feature makes it possible to tag outputs with an index, so they can be retrieved through an indexer API not only by their address, but also based on the Tag. The combination of a Tag Feature, a Metadata Feature and a Sender Feature makes it possible to retrieve data associated to an address and stored in outputs that were created by a specific party (Sender) for a specific purpose (Tag).

An example use case is voting on the Tangle via the participation plugin.

Additional syntactic transaction validation rules:
  • An output with Tag Feature is valid, if and only if 0 < length(Tag) ≤ Max Tag Length.
Tag Feature
Defines an indexation tag to which the output can be indexed by additional node plugins.
Name Type Description
Feature Type uint8 Set to value 3 to denote a Tag Feature.
Tag (uint8)ByteArray Binary indexation tag. A leading uint8 denotes its length.

Chain Constraint in UTXO

Previously created transaction outputs are destroyed when they are consumed in a subsequent transaction as an input. The chain constraint makes it possible to carry the UTXO state machine state encoded in outputs across transactions. When an output with chain constraint is consumed, that transaction has to create a single subsequent output that carries the state forward. The state can be updated according to the transition rules defined for the given type of output and its current state. As a consequence, each such output has a unique successor, and together they form a path or chain in the graph induced by the UTXO spends. Each chain is identified by its globally unique identifier.

Alias outputs, foundry outputs and NFT outputs all use this chain constraint concept and define their own unique identifiers.
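The globally unique chain identifier is derived from the output that starts the chain. A sketch of the derivation for Alias and NFT IDs, assuming (per the TIP-20 serialization conventions) that an Output ID is the Transaction ID (32 bytes) concatenated with the output index as a little-endian uint16:

```python
import hashlib

def chain_id(transaction_id: bytes, output_index: int) -> bytes:
    """BLAKE2b-256 hash of the Output ID that created the chain.

    The result is the Alias ID / NFT ID, fixed for the lifetime of the chain
    no matter how many times the state is transitioned afterwards.
    """
    output_id = transaction_id + output_index.to_bytes(2, "little")
    return hashlib.blake2b(output_id, digest_size=32).digest()
```

Because the Output ID is consumed exactly once, the identifier is unique across the whole ledger history.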

Output Design

In the following, we define four new output types. They are all designed with specific use cases in mind:

  • Basic Output: transfer of funds with attached metadata and optional spending restrictions. Main use cases are on-ledger ISC requests, native asset transfers and indexed data storage in the UTXO ledger.
  • Alias Output: representing ISC chain accounts on L1 that can process requests and transfer funds.
  • Foundry Output: supply control of user defined native tokens. A vehicle for cross-chain asset transfers and asset wrapping.
  • NFT Output: an output that represents a non-fungible token with attached metadata and proof-of-origin. An NFT is represented as an output so that the token and metadata are transferred together, for example as a smart contract request. NFTs could also be implemented with native tokens, but then ownership of the token would not imply ownership of the foundry that holds its metadata.

The validation of outputs is part of the transaction validation process. There are two levels of validation for transactions: syntactic and semantic validation. The former validates the structure of the transaction (and outputs), while the latter validates whether protocol rules are respected in the semantic context of the transaction. Outputs hence are validated on both levels:

  1. Transaction Syntactic Validation: validates the structure of each output created by the transaction.
  2. Transaction Semantic Validation:
    • For consumed outputs: validates whether the output can be unlocked in a transaction given the semantic transaction context.
    • For created outputs: validates whether the output can be created in a transaction given the semantic transaction context.

Each new output type may add its own validation rules which become part of the transaction validation rules if the output is placed inside a transaction. Unlock Conditions and Features described previously also add constraints to transaction validation when they are placed in outputs.

Basic Output

A Basic Output can hold native tokens and might have several unlock conditions and optional features. The combination of several features provides the base functionality for the output to be used as an on-ledger smart contract request:

  • Verified Sender,
  • Attached Metadata that can encode the request payload for layer 2,
  • Return Amount to get back the storage deposit,
  • Timelock to be able to time requests,
  • Expiration to recover funds in case of chain inactivity.

Besides, the Tag Feature is a tool to store arbitrary, indexed data with verified origin in the ledger.

Note that a Basic Output in its simplest possible form, with only an Address Unlock Condition and without features or native tokens, is functionally equivalent to a SigLockedSingleOutput: it has an address and an IOTA balance. Therefore, the aforementioned output type, which was introduced for Chrysalis Part 2 via TIP-7, is deprecated with the replacement of the TIP-20 Transaction Payload.

Basic Output
Describes a basic output with optional features.
Name Type Description
Output Type uint8 Set to value 3 to denote a Basic Output.
Amount uint64 The amount of IOTA coins held by the output.
Native Tokens Count uint8 The number of native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Type Description
Token ID ByteArray[38] Identifier of the native token.
Amount uint256 Amount of native tokens of the given Token ID.
Unlock Conditions Count uint8 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
Address Unlock Condition
Name Type Description
Unlock Condition Type uint8 Set to value 0 to denote an Address Unlock Condition.
Address
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Storage Deposit Return Unlock Condition
Defines the amount of IOTAs used as storage deposit that have to be returned to Return Address.
Name Type Description
Unlock Condition Type uint8 Set to value 1 to denote a Storage Deposit Return Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Return Amount uint64 Amount of IOTA coins the consuming transaction should deposit to the address defined in Return Address.
Timelock Unlock Condition
Defines a unix timestamp until which the output cannot be unlocked.
Name Type Description
Unlock Condition Type uint8 Set to value 2 to denote a Timelock Unlock Condition.
Unix Time uint32 Unix time (seconds since Unix epoch) starting from which the output can be consumed.
Expiration Unlock Condition
Defines a unix time until which only Address, defined in Address Unlock Condition, is allowed to unlock the output. After the unix time is reached or passed, only Return Address can unlock it.
Name Type Description
Unlock Condition Type uint8 Set to value 3 to denote an Expiration Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Unix Time uint32 Before this unix time, Address Unlock Condition is allowed to unlock the output, after that only the address defined in Return Address.
Features Count uint8 The number of features following.
Features atMostOneOfEach
Sender Feature
Identifies the validated sender of the output.
Name Type Description
Feature Type uint8 Set to value 0 to denote a Sender Feature.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.
Tag Feature
Defines an indexation tag to which the output can be indexed by additional node plugins.
Name Type Description
Feature Type uint8 Set to value 3 to denote a Tag Feature.
Tag (uint8)ByteArray Binary indexation data. A leading uint8 denotes its length.

Additional Transaction Syntactic Validation Rules

  • Amount field must fulfill the dust protection requirements and must not be 0.
  • Amount field must be ≤ Max IOTA Supply.
  • Native Tokens Count must not be greater than Max Native Tokens Count.
  • Native Tokens must be lexicographically sorted based on Token ID.
  • Each Native Token must be unique in the set of Native Tokens based on its Token ID. No duplicates are allowed.
  • Amount of any Native Token must not be 0.
  • It must hold true that 1 ≤ Unlock Conditions Count ≤ 4.
  • Unlock Condition Type of an Unlock Condition must define one of the following types:
    • Address Unlock Condition
    • Storage Deposit Return Unlock Condition
    • Timelock Unlock Condition
    • Expiration Unlock Condition
  • Unlock Conditions must be sorted in ascending order based on their Unlock Condition Type.
  • Syntactic validation of all present unlock conditions must pass.
  • Address Unlock Condition must be present.
  • It must hold true that 0 ≤ Features Count ≤ 3.
  • Feature Type of a Feature must define one of the following types:
    • Sender Feature
    • Metadata Feature
    • Tag Feature
  • Features must be sorted in ascending order based on their Feature Type.
  • Syntactic validation of all present features must pass.
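The unlock-condition rules above can be sketched as a single predicate. This is an illustrative sketch, not an implementation from the spec: the function name and list-of-type-bytes representation are assumptions, while the type codes (0, 1, 2, 3) and the count, ordering, uniqueness and mandatory-Address rules come from the list above.

```python
# Unlock Condition Type codes from this spec; structure is illustrative.
ADDRESS, STORAGE_DEPOSIT_RETURN, TIMELOCK, EXPIRATION = 0, 1, 2, 3
ALLOWED = {ADDRESS, STORAGE_DEPOSIT_RETURN, TIMELOCK, EXPIRATION}

def check_unlock_conditions(condition_types):
    """condition_types: list of Unlock Condition Type bytes, in serialized order."""
    if not 1 <= len(condition_types) <= 4:
        return False                                    # 1 <= count <= 4
    if any(t not in ALLOWED for t in condition_types):
        return False                                    # only the four listed types
    if condition_types != sorted(condition_types):
        return False                                    # ascending by type
    if len(set(condition_types)) != len(condition_types):
        return False                                    # at most one of each
    return ADDRESS in condition_types                   # Address UC is mandatory

assert check_unlock_conditions([0, 2, 3])
assert not check_unlock_conditions([2, 0])   # not sorted ascending
assert not check_unlock_conditions([1, 2])   # Address Unlock Condition missing
```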

Additional Transaction Semantic Validation Rules

Consumed Outputs

  • The unlock of the input must correspond to Address field in the Address Unlock Condition and the unlock must be valid.
  • The unlock is valid if and only if all unlock conditions and features present in the output validate.

Created Outputs

  • All Unlock Condition imposed transaction validation criteria must be fulfilled.
  • All Feature imposed transaction validation criteria must be fulfilled.

Alias Output

The Alias Output is a specific implementation of a UTXO state machine. Alias ID, the unique identifier of an instance of the deployed state machine, is generated deterministically by the protocol and is not allowed to change in any future state transitions.

Alias Output represents an alias account in the ledger with two control levels and a permanent Alias Address. The account owns other outputs that are locked under Alias Address. The account keeps track of state transitions (State Index counter), controlled foundries (Foundry Counter) and anchors the layer 2 state as metadata into the UTXO ledger.

Alias Output
Describes an alias account in the ledger that can be controlled by the state and governance controllers.
Name Type Description
Output Type uint8 Set to value 4 to denote an Alias Output.
Amount uint64 The amount of IOTA coins held by the output.
Native Tokens Count uint8 The number of native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Type Description
Token ID ByteArray[38] Identifier of the native token.
Amount uint256 Amount of native tokens of the given Token ID.
Alias ID ByteArray[32] Unique identifier of the alias, which is the BLAKE2b-256 hash of the Output ID that created it. Alias Address = Alias Address Type || Alias ID
State Index uint32 A counter that must increase by 1 every time the alias is state transitioned.
State Metadata (uint16)ByteArray Metadata that can only be changed by the state controller. A leading uint16 denotes its length.
Foundry Counter uint32 A counter that denotes the number of foundries created by this alias account.
Unlock Conditions Count uint8 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
State Controller Address Unlock Condition
Name Type Description
Unlock Condition Type uint8 Set to value 4 to denote a State Controller Address Unlock Condition.
Address
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Governor Address Unlock Condition
Name Type Description
Unlock Condition Type uint8 Set to value 5 to denote a Governor Address Unlock Condition.
Address
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Features Count uint8 The number of features following.
Features atMostOneOfEach
Sender Feature
Identifies the validated sender of the output.
Name Type Description
Feature Type uint8 Set to value 0 to denote a Sender Feature.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.
Immutable Features Count uint8 The number of immutable features following. Immutable features are defined upon deployment of the UTXO state machine and are not allowed to change in any future state transition.
Immutable Features atMostOneOfEach
Issuer Feature
Identifies the validated issuer of the UTXO state machine.
Name Type Description
Feature Type uint8 Set to value 1 to denote an Issuer Feature.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.

Additional Transaction Syntactic Validation Rules

Output Syntactic Validation

  • Amount field must fulfill the dust protection requirements and must not be 0.
  • Amount field must be ≤ Max IOTA Supply.
  • Native Tokens Count must not be greater than Max Native Tokens Count.
  • Native Tokens must be lexicographically sorted based on Token ID.
  • Each Native Token must be unique in the set of Native Tokens based on its Token ID. No duplicates are allowed.
  • Amount of any Native Token must not be 0.
  • It must hold true that Unlock Conditions Count = 2.
  • Unlock Condition Type of an Unlock Condition must define one of the following types:
    • State Controller Address Unlock Condition
    • Governor Address Unlock Condition
  • Unlock Conditions must be sorted in ascending order based on their Unlock Condition Type.
  • Syntactic validation of all present unlock conditions must pass.
  • It must hold true that 0 ≤ Features Count ≤ 2.
  • Feature Type of a Feature in Features must define one of the following types:
    • Sender Feature
    • Metadata Feature
  • It must hold true that 0 ≤ Immutable Features Count ≤ 2.
  • Feature Type of a Feature in Immutable Features must define one of the following types:
    • Issuer Feature
    • Metadata Feature
  • Features must be sorted in ascending order based on their Feature Type both in Features and Immutable Features fields.
  • Syntactic validation of all present features must pass.
  • When Alias ID is zeroed out, State Index and Foundry Counter must be 0.
  • length(State Metadata) must not be greater than Max Metadata Length.
  • Address of State Controller Address Unlock Condition and Address of Governor Address Unlock Condition must be different from the alias address derived from Alias ID.

Additional Transaction Semantic Validation Rules

  • Explicit Alias ID: Alias ID is taken as the value of the Alias ID field in the alias output.
  • Implicit Alias ID: When an alias output is consumed as an input in a transaction and Alias ID field is zeroed out while State Index and Foundry Counter are zero, take the BLAKE2b-256 hash of the Output ID of the input as Alias ID.
  • For every non-zero explicit Alias ID on the output side there must be a corresponding alias on the input side. The corresponding alias has the explicit or implicit Alias ID equal to that of the alias on the output side.
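The implicit Alias ID rule above can be illustrated with a short sketch. Note the assumptions: the function name is hypothetical, and the Output ID is assumed to be serialized as the 32-byte Transaction ID followed by the output index as a little-endian uint16; only the BLAKE2b-256 hashing step is stated directly by the rule.

```python
import hashlib

def alias_id_from_output_id(transaction_id: bytes, output_index: int) -> bytes:
    # Output ID assumed as Transaction ID || output index (uint16, little-endian).
    output_id = transaction_id + output_index.to_bytes(2, "little")
    # Implicit Alias ID: BLAKE2b-256 hash of the Output ID that created the alias.
    return hashlib.blake2b(output_id, digest_size=32).digest()

tx_id = bytes(32)                          # placeholder 32-byte transaction ID
alias_id = alias_id_from_output_id(tx_id, 0)
assert len(alias_id) == 32                 # matches ByteArray[32] Alias ID
```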

Consumed Outputs

Whenever an alias output is consumed in a transaction, the alias is transitioned into its next state. The current state is defined as the consumed alias output, while the next state is defined as the alias output with the same explicit Alias ID on the output side. There are two types of transitions: state transition and governance transition.

  • State transition:
    • A state transition is identified by an incremented State Index.
    • The State Index must be incremented by 1.
    • The unlock must correspond to the Address of State Controller Address Unlock Condition.
    • State transition can only change the following fields in the next state:
      • IOTA Amount,
      • Native Tokens,
      • State Index,
      • State Metadata,
      • Foundry Counter and
      • Sender Feature in Features.
    • Foundry Counter field must increase by the number of foundry outputs created in the transaction that map to Alias ID. The Serial Number fields of the created foundries must be the set of natural numbers that cover the open-ended interval between the previous and next values of the Foundry Counter field in the alias output.
    • The created foundry outputs must be sorted in the list of outputs by their Serial Number. Note, that any foundry that maps to Alias ID and has a Serial Number that is less or equal to the Foundry Counter of the input alias is ignored when it comes to sorting.
    • Newly created foundries in the transaction that map to different aliases can be interleaved when it comes to sorting.
  • Governance transition:
    • A governance transition is identified by an unchanged State Index in next state. If there is no alias output on the output side with a corresponding explicit Alias ID, the alias is being destroyed. The next state is the empty state.
    • The unlock must correspond to the Address of Governor Address Unlock Condition.
    • Governance transition must only change the following fields:
      • Address of State Controller Address Unlock Condition,
      • Address of Governor Address Unlock Condition,
      • Metadata Feature and Sender Feature in Features.
    • The Metadata Feature is optional; the governor can put additional info about the chain here, for example the chain name, fee structure, supported VMs or a list of access nodes, anything that helps clients fetch info (i.e. account balances) about the layer 2 network.
  • When a consumed alias output has Features defined in Immutable Features and a corresponding alias output on the output side, Immutable Features is not allowed to change.
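The state/governance distinction above boils down to comparing State Index across the transition. The following sketch classifies a transition under that rule; the function name and the dict-based state representation are illustrative assumptions, not part of the spec.

```python
def classify_transition(current_state, next_state):
    """current_state/next_state: dicts with a 'state_index' key;
    next_state is None when the alias is destroyed."""
    if next_state is None:
        # No output-side alias with the same Alias ID: governance destruction.
        return "governance (destruction)"
    if next_state["state_index"] == current_state["state_index"] + 1:
        return "state"                       # State Index incremented by 1
    if next_state["state_index"] == current_state["state_index"]:
        return "governance"                  # State Index unchanged
    raise ValueError("State Index may only stay equal or increment by 1")

assert classify_transition({"state_index": 5}, {"state_index": 6}) == "state"
assert classify_transition({"state_index": 5}, {"state_index": 5}) == "governance"
```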

Created Outputs

  • When Issuer Feature is present in an output and explicit Alias ID is zeroed out, an input with Address field that corresponds to Issuer must be unlocked in the transaction.

Notes

  • Governor Address Unlock Condition field is made mandatory for now to help formal verification. When the same entity is defined for both the state and governance controllers, the output is self-governed. Later, for compression reasons, it is possible to make the governance controller optional and define a self-governed alias as one that does not have the Governor Address Unlock Condition set.
  • Indexers and node plugins shall map the alias address of the output derived with Alias ID to the regular address -> output mapping table, so that given an Alias Address, its most recent unspent alias output can be retrieved.

Foundry Output

A foundry output is an output that controls the supply of user defined native tokens. It can mint and melt tokens according to the policy defined in the Token Scheme field of the output. Foundries can only be created and controlled by aliases.

The concatenation of Address || Serial Number || Token Scheme Type fields defines the unique identifier of the foundry, the Foundry ID.

Upon creation of the foundry, the alias defined in the Address field of the Immutable Alias Address Unlock Condition must be unlocked in the same transaction, and its Foundry Counter field must increment. This incremented value defines Serial Number, while the Token Scheme can be chosen freely.

Foundry ID is not allowed to change after deployment, therefore neither Address, nor Serial Number or Token Scheme can change during the lifetime of the foundry.

Foundries control the supply of tokens with unique identifiers, so-called Token IDs. The Token ID of tokens controlled by a specific foundry is the same as the Foundry ID.
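The Foundry ID construction above can be sketched as byte concatenation. Assumptions beyond the quoted definition: the Address is the serialized Alias Address (type byte 8 followed by the 32-byte Alias ID), the Serial Number is a little-endian uint32, and the function name is hypothetical. The resulting 38-byte length matches the ByteArray[38] Token ID field in the schemas.

```python
ALIAS_ADDRESS_TYPE = 8      # Alias Address type code from this spec
SIMPLE_TOKEN_SCHEME = 0     # Simple Token Scheme type code from this spec

def foundry_id(alias_id: bytes, serial_number: int, token_scheme_type: int) -> bytes:
    # Foundry ID = Address || Serial Number || Token Scheme Type.
    address = bytes([ALIAS_ADDRESS_TYPE]) + alias_id          # 33 bytes
    return address + serial_number.to_bytes(4, "little") + bytes([token_scheme_type])

fid = foundry_id(bytes(32), 1, SIMPLE_TOKEN_SCHEME)
assert len(fid) == 38   # 33 + 4 + 1, matching ByteArray[38] Token ID
```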

Foundry Output
Describes a foundry output that is controlled by an alias.
Name Type Description
Output Type uint8 Set to value 5 to denote a Foundry Output.
Amount uint64 The amount of IOTA coins held by the output.
Native Tokens Count uint8 The number of different native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Type Description
Token ID ByteArray[38] Identifier of the native token.
Amount uint256 Amount of native tokens of the given Token ID.
Serial Number uint32 The serial number of the foundry with respect to the controlling alias.
Token Scheme oneOf
Simple Token Scheme
Name Type Description
Token Scheme Type uint8 Set to value 0 to denote a Simple Token Scheme.
Minted Tokens uint256 Amount of tokens minted by this foundry.
Melted Tokens uint256 Amount of tokens melted by this foundry.
Maximum Supply uint256 Maximum supply of tokens controlled by this foundry.
Unlock Conditions Count uint8 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
Immutable Alias Address Unlock Condition
Name Type Description
Unlock Condition Type uint8 Set to value 6 to denote an Immutable Alias Address Unlock Condition.
Address
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
Features Count uint8 The number of features following.
Features atMostOneOfEach
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.
Immutable Features Count uint8 The number of immutable features following. Immutable features are defined upon deployment of the UTXO state machine and are not allowed to change in any future state transition.
Immutable Features atMostOneOfEach
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.

Additional Transaction Syntactic Validation Rules

Output Syntactic Validation

  • Amount field must fulfill the dust protection requirements and must not be 0.
  • Amount field must be ≤ Max IOTA Supply.
  • Native Tokens Count must not be greater than Max Native Tokens Count.
  • Native Tokens must be lexicographically sorted based on Token ID.
  • Each Native Token must be unique in the set of Native Tokens based on its Token ID. No duplicates are allowed.
  • Amount of any Native Token must not be 0.
  • It must hold true that Unlock Conditions Count = 1.
  • Unlock Condition Type of an Unlock Condition must define one of the following types:
    • Immutable Alias Address Unlock Condition
  • Syntactic validation of all present unlock conditions must pass.
  • It must hold true that 0 ≤ Features Count ≤ 1.
  • Feature Type of a Feature in Features must define one of the following types:
    • Metadata Feature
  • It must hold true that 0 ≤ Immutable Features Count ≤ 1.
  • Feature Type of a Feature in Immutable Features must define one of the following types:
    • Metadata Feature
  • Syntactic validation of all present features must pass.
  • Token Scheme must define one of the following types:
    • Simple Token Scheme
Simple Token Scheme Syntactic Validation
  • Token Scheme Type of a Simple Token Scheme must be 0.
  • Minted Tokens - Melted Tokens must not be greater than Maximum Supply.
  • Melted Tokens must not be greater than Minted Tokens.
  • Maximum Supply must be larger than zero.
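The three Simple Token Scheme syntactic checks above can be expressed as one predicate. A minimal sketch; the function and parameter names are illustrative, while the three comparisons come verbatim from the rules.

```python
def simple_token_scheme_valid(minted: int, melted: int, maximum: int) -> bool:
    return (
        minted - melted <= maximum   # circulating supply within Maximum Supply
        and melted <= minted         # Melted Tokens <= Minted Tokens
        and maximum > 0              # Maximum Supply must be larger than zero
    )

assert simple_token_scheme_valid(minted=100, melted=20, maximum=1000)
assert not simple_token_scheme_valid(minted=10, melted=20, maximum=1000)
assert not simple_token_scheme_valid(minted=0, melted=0, maximum=0)
```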

Additional Transaction Semantic Validation Rules

A foundry is essentially a UTXO state machine. A transaction might either create a new foundry with a unique Foundry ID, transition an already existing foundry or destroy it. The current and next states of the state machine are encoded in inputs and outputs respectively.

  • The current state of the foundry with Foundry ID X in a transaction is defined as the consumed foundry output where Foundry ID = X.
  • The next state of the foundry with Foundry ID X in a transaction is defined as the created foundry output where Foundry ID = X.
  • Foundry Diff is the pair of the current and next state of the foundry output in the transaction.
| A transaction that...   | Current State          | Next State             |
| ----------------------- | ---------------------- | ---------------------- |
| Creates the foundry     | Empty                  | Output with Foundry ID |
| Transitions the foundry | Input with Foundry ID  | Output with Foundry ID |
| Destroys the foundry    | Input with Foundry ID  | Empty                  |
  • The foundry output must be unlocked like any other output type where the Address Unlock Condition defines an Alias Address, by transitioning the alias in the very same transaction. See section alias unlocking for more details.
  • When the current state of the foundry with Foundry ID is empty, it must hold true for Serial Number in the next state, that:
    • Foundry Counter(InputAlias) < Serial Number <= Foundry Counter(OutputAlias)
    • An alias can create several new foundries in one transaction. As specified for the alias output, freshly created foundry outputs must be sorted in the list of outputs based on their Serial Number. No duplicates are allowed.
    • The two previous rules make sure that each foundry output produced by an alias has a unique Serial Number, hence each Foundry ID is unique.
  • Native tokens present in a transaction are all native tokens present in inputs and outputs of the transaction. Native tokens of a transaction must be a set based on their Token ID.
  • There must be at most one Token ID in the native token set of the transaction that maps to a specific Foundry ID.
  • When neither Current State nor Next State is empty:
    • Immutable Alias Address Unlock Condition must not change.
    • Serial Number must not change.
    • Token Scheme Type must not change.
    • Features in Immutable Features must not change.
  • Token Scheme Semantic Validation Rules must be fulfilled.
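The Serial Number rule above, combined with the alias output's Foundry Counter rule, requires that the new foundries in a transaction carry exactly the serial numbers in the interval (Foundry Counter of the input alias, Foundry Counter of the output alias], in sorted order with no duplicates or gaps. A sketch under those assumptions (the function name is hypothetical):

```python
def serials_valid(input_counter: int, output_counter: int, serials: list) -> bool:
    """serials: Serial Numbers of newly created foundries, in output order."""
    # Must cover (input_counter, output_counter] exactly, with no gaps...
    expected = list(range(input_counter + 1, output_counter + 1))
    # ...and appear sorted in the list of outputs.
    return sorted(serials) == expected and serials == sorted(serials)

# Alias goes from Foundry Counter 2 to 5, creating foundries 3, 4 and 5 in order.
assert serials_valid(2, 5, [3, 4, 5])
assert not serials_valid(2, 5, [3, 5])      # gap: interval not fully covered
assert not serials_valid(2, 5, [4, 3, 5])   # not sorted in the outputs list
```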

Token Scheme Semantic Validation Rules

Token Scheme Validation takes Token Diff and Foundry Diff and validates if the scheme constraints are respected.

Simple Token Scheme Validation Rules
  • Let Token Diff denote the difference between native token balances of the input and the output side of the transaction of the single Token ID that maps to the Foundry ID. Minting results in excess of tokens on the output side (positive diff), melting results in excess on the input side (negative diff). Now, the following conditions must hold for Token Diff:
    1. When Token Diff > 0, it must hold true that:
    • Current State(Minted Tokens) + Token Diff = Next State(Minted Tokens).
    • Current State(Melted Tokens) = Next State(Melted Tokens).
    2. When Token Diff < 0, it must hold true that:
    • Current State(Melted Tokens) <= Next State(Melted Tokens)
    • [Next State(Melted Tokens) - Current State(Melted Tokens)] <= |Token Diff|.
    • When Current State(Melted Tokens) != Next State(Melted Tokens), it must be true that Current State(Minted Tokens) = Next State(Minted Tokens)
    3. When Current State is empty, Current State(Minted Tokens) = 0 and Current State(Melted Tokens) = 0.
    4. When Next State is empty, conditions 1 and 2 are ignored. It must hold true that Current State(Minted Tokens) + Token Diff = Current State(Melted Tokens)
  • When neither Current State nor Next State is empty:
    • Maximum Supply field must not change.
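The Simple Token Scheme semantic rules above can be sketched as follows. States are (Minted Tokens, Melted Tokens) pairs, with None standing for the empty state, and Token Diff is the output-side minus input-side balance of the foundry's Token ID. The function name and representation are illustrative; the handling of Token Diff = 0 (both counters unchanged) is an assumption, as the quoted rules only constrain non-zero diffs.

```python
def token_diff_valid(current, next_, diff: int) -> bool:
    # Empty current state counts as (0, 0) per the rules above.
    cur_minted, cur_melted = current if current else (0, 0)
    if next_ is None:
        # Destruction: remaining circulating supply must be fully melted/burned.
        return cur_minted + diff == cur_melted
    nxt_minted, nxt_melted = next_
    if diff > 0:   # minting
        return cur_minted + diff == nxt_minted and cur_melted == nxt_melted
    if diff < 0:   # melting and/or burning
        if not (cur_melted <= nxt_melted <= cur_melted + (-diff)):
            return False
        # If Melted Tokens changed, Minted Tokens must stay unchanged.
        return cur_melted == nxt_melted or cur_minted == nxt_minted
    # diff == 0: assume both counters unchanged (not stated explicitly above).
    return cur_minted == nxt_minted and cur_melted == nxt_melted

assert token_diff_valid((100, 0), (150, 0), 50)     # mint 50 tokens
assert token_diff_valid((150, 0), (150, 30), -30)   # melt 30 tokens
assert not token_diff_valid((100, 0), (160, 0), 50) # Minted Tokens inconsistent
```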

Notes

  • A token scheme is a list of hard coded constraints. It is not feasible at the moment to foresee the future needs/requirements of hard coded constraints, so it is impossible to design token schemes as any possible combination of those constraints. A better design would be to have a list of possible constraints (and their related fields) from which the user can choose. The chosen combination should still be encoded as a bitmask inside the Token ID.
  • Additional token schemes will be defined that make use of the Foundry Diff as well, for example validating that a certain amount of tokens can only be minted/melted after a certain date.
  • For now, only token scheme 0 is supported. Additional token schemes will be designed iteratively when the need arises.
  • The Foundry ID of a foundry output should be queryable in indexers, so that given a Foundry ID, the Output ID of the foundry output can be retrieved. Foundry ID behaves like an address that can't unlock anything. While it is not necessarily needed for the protocol, it is needed for client side operations, such as:
    • Retrieving the current state of the foundry.
    • Accessing token metadata in foundry based on Foundry ID/Token ID.

NFT Output

Non-fungible tokens in the ledger are implemented with a special output type, the so-called NFTOutput.

Each NFT output gets assigned a unique identifier NFT ID upon creation by the protocol. NFT ID is BLAKE2b-256 hash of the Output ID that created the NFT. The address of the NFT is the concatenation of NFT Address Type || NFT ID.
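The NFT ID and NFT Address derivation just described can be sketched in a few lines. As with the alias example, the Output ID is assumed to be the 32-byte Transaction ID followed by a little-endian uint16 output index, and the function name is hypothetical; the BLAKE2b-256 hash and the leading NFT Address Type byte (16) come from the text above.

```python
import hashlib

NFT_ADDRESS_TYPE = 16   # NFT Address type code from this spec

def nft_address(transaction_id: bytes, output_index: int) -> bytes:
    # Output ID assumed as Transaction ID || output index (uint16, little-endian).
    output_id = transaction_id + output_index.to_bytes(2, "little")
    # NFT ID = BLAKE2b-256 hash of the creating Output ID.
    nft_id = hashlib.blake2b(output_id, digest_size=32).digest()
    # NFT Address = NFT Address Type || NFT ID.
    return bytes([NFT_ADDRESS_TYPE]) + nft_id

addr = nft_address(bytes(32), 0)
assert len(addr) == 33 and addr[0] == NFT_ADDRESS_TYPE
```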

The NFT may contain immutable metadata set upon creation, and a verified Issuer. The output type supports all non-alias specific (state controller, governor) unlock conditions and optional features so that the output can be sent as a request to smart contract chain accounts.

NFT Output
Describes an NFT output, a globally unique token with metadata attached.
Name Type Description
Output Type uint8 Set to value 6 to denote an NFT Output.
Amount uint64 The amount of IOTA coins held by the output.
Native Tokens Count uint8 The number of native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Type Description
Token ID ByteArray[38] Identifier of the native token.
Amount uint256 Amount of native tokens of the given Token ID.
NFT ID ByteArray[32] Unique identifier of the NFT, which is the BLAKE2b-256 hash of the Output ID that created it. NFT Address = NFT Address Type || NFT ID
Unlock Conditions Count uint8 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
Address Unlock Condition
Name Type Description
Unlock Condition Type uint8 Set to value 0 to denote an Address Unlock Condition.
Address
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Storage Deposit Return Unlock Condition
Defines the amount of IOTAs used as storage deposit that have to be returned to Return Address.
Name Type Description
Unlock Condition Type uint8 Set to value 1 to denote a Storage Deposit Return Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Return Amount uint64 Amount of IOTA coins the consuming transaction should deposit to the address defined in Return Address.
Timelock Unlock Condition
Defines a unix timestamp until which the output can not be unlocked.
Name Type Description
Unlock Condition Type uint8 Set to value 2 to denote a Timelock Unlock Condition.
Unix Time uint32 Unix time (seconds since Unix epoch) starting from which the output can be consumed.
Expiration Unlock Condition
Defines a unix time until which only Address, defined in Address Unlock Condition, is allowed to unlock the output. After the unix time is reached or passed, only Return Address can unlock it.
Name Type Description
Unlock Condition Type uint8 Set to value 3 to denote an Expiration Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Unix Time uint32 Before this unix time, Address Unlock Condition is allowed to unlock the output, after that only the address defined in Return Address.
Features Count uint8 The number of features following.
Features atMostOneOfEach
Sender Feature
Identifies the validated sender of the output.
Name Type Description
Feature Type uint8 Set to value 0 to denote a Sender Feature.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.
Tag Feature
Defines an indexation tag to which the output can be indexed by additional node plugins.
Name Type Description
Feature Type uint8 Set to value 3 to denote a Tag Feature.
Tag (uint8)ByteArray Binary indexation data. A leading uint8 denotes its length.
Immutable Features Count uint8 The number of immutable features following. Immutable features are defined upon deployment of the UTXO state machine and are not allowed to change in any future state transition.
Immutable Features atMostOneOfEach
Issuer Feature
Identifies the validated issuer of the UTXO state machine.
Name Type Description
Feature Type uint8 Set to value 1 to denote an Issuer Feature.
Address oneOf
Ed25519 Address
Name Type Description
Address Type uint8 Set to value 0 to denote an Ed25519 Address.
PubKeyHash ByteArray[32] The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Type Description
Address Type uint8 Set to value 8 to denote an Alias Address.
Alias ID ByteArray[32] The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Type Description
Address Type uint8 Set to value 16 to denote an NFT Address.
NFT ID ByteArray[32] The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Type Description
Feature Type uint8 Set to value 2 to denote a Metadata Feature.
Data (uint16)ByteArray Binary data. A leading uint16 denotes its length.

Additional Transaction Syntactic Validation Rules

Output Syntactic Validation

  • Amount field must fulfill the dust protection requirements and must not be 0.
  • Amount field must be ≤ Max IOTA Supply.
  • Native Tokens Count must not be greater than Max Native Tokens Count.
  • Native Tokens must be lexicographically sorted based on Token ID.
  • Each Native Token must be unique in the set of Native Tokens based on its Token ID. No duplicates are allowed.
  • Amount of any Native Token must not be 0.
  • It must hold true that 1 ≤ Unlock Conditions Count ≤ 4.
  • Unlock Condition Type of an Unlock Condition must define one of the following types:
    • Address Unlock Condition
    • Storage Deposit Return Unlock Condition
    • Timelock Unlock Condition
    • Expiration Unlock Condition
  • Unlock Conditions must be sorted in ascending order based on their Unlock Condition Type.
  • Syntactic validation of all present unlock conditions must pass.
  • Address Unlock Condition must be present.
  • It must hold true that 0 ≤ Features Count ≤ 3.
  • Feature Type of a Feature in Features must define one of the following types:
    • Sender Feature
    • Metadata Feature
    • Tag Feature
  • It must hold true that 0 ≤ Immutable Features Count ≤ 2.
  • Feature Type of a Feature in Immutable Features must define one of the following types:
    • Issuer Feature
    • Metadata Feature
  • Features must be sorted in ascending order based on their Feature Type both in Features and Immutable Features fields.
  • Syntactic validation of all present features must pass.
  • Address field of the Address Unlock Condition must not be the same as the NFT address derived from NFT ID.

Additional Transaction Semantic Validation Rules

  • Explicit NFT ID: NFT ID is taken as the value of the NFT ID field in the NFT output.
  • Implicit NFT ID: When an NFT output is consumed as an input in a transaction and NFT ID field is zeroed out, take the BLAKE2b-256 hash of the Output ID of the input as NFT ID.
  • For every non-zero explicit NFT ID on the output side there must be a corresponding NFT on the input side. The corresponding NFT has the explicit or implicit NFT ID equal to that of the NFT on the output side.

Consumed Outputs

  • The unlock of the input corresponds to Address field of the Address Unlock Condition and the unlock is valid.
  • The unlock is valid if and only if all unlock conditions and features present in the output validate.
  • When a consumed NFT output has a corresponding NFT output on the output side, Immutable Features field must not change.
  • When a consumed NFT output has no corresponding NFT output on the output side, the NFT is being burned. Funds and assets inside the burned NFT output must be redistributed to other outputs in the burning transaction.
:bangbang: Careful with NFT burning :bangbang:

Other outputs in the ledger that are locked to the address of the NFT can only be unlocked by including the NFT itself in the transaction. If the NFT is burned, such funds are locked forever. It is strongly advised to always check and sweep what the NFT owns in the ledger before burning it.

Created Outputs

  • When Issuer Feature is present in an output and explicit NFT ID is zeroed out, an input with Address field that corresponds to Issuer must be unlocked in the transaction. If Address is either Alias Address or NFT Address, their corresponding outputs (defined by Alias ID and NFT ID) must be unlocked in the transaction.
  • All Unlock Condition imposed transaction validation criteria must be fulfilled.
  • All Feature imposed transaction validation criteria must be fulfilled.

Notes

  • It would be possible to have two-step issuer verification: First, the NFT is minted, and then metadata can be immutably locked into the output. The metadata contains an issuer public key plus a signature of the unique NFT ID. This way a smart contract chain can mint on behalf of the user and then push the issuer signature in a subsequent step.

Unlocking Chain Script Locked Outputs

Two of the introduced output types (Alias, NFT) implement the so-called UTXO chain constraint. These outputs receive their unique identifiers upon creation, generated by the protocol, and carry it forward with them through transactions until they are destroyed. These unique identifiers (Alias ID, NFT ID) also function as global addresses for the state machines, but unlike Ed25519 Addresses, they are not backed by private keys that could be used for signing. The rightful owners who can unlock these addresses are defined in the outputs themselves.

Since such addresses are accounts in the ledger, it is possible to send funds to these addresses. The unlock mechanism of such funds is designed in a way that proving ownership of the address is reduced to the ability to unlock the corresponding output that defines the address.

Alias Locking & Unlocking

A transaction may consume a (non-alias) output that belongs to an Alias Address by state transitioning the alias output with the matching Alias ID. This serves the exact same purpose as providing a signature to unlock an output locked under a private key backed address, such as Ed25519 Addresses.

On protocol level, alias unlocking is done using a new unlock type, called Alias Unlock.

Alias Unlock
Points to the unlock of a consumed alias output.
Name Type Description
Unlock Type uint8 Set to value 2 to denote an Alias Unlock.
Alias Reference Unlock Index uint16 Index of input and unlock corresponding to an alias output.

This unlock is similar to the Reference Unlock. However, it is valid if and only if the input of the transaction at index Alias Reference Unlock Index is an alias output with the same Alias ID as the one derived from the Address field of the to-be unlocked output.

Additionally, the Alias Unlocks must also be ordered to prevent circular dependencies:

If the i-th Unlock of a transaction is an Alias Unlock and has Alias Reference Unlock Index set to k, it must hold that i > k. Hence, an Alias Unlock can only reference an Unlock (unlocking the corresponding alias) at a smaller index.

For example, the scenario where Alias A is locked to the address of Alias B while Alias B is locked to the address of Alias A introduces a circular dependency and is not well-defined. By requiring the Unlocks to be ordered as described above, a transaction consuming both Alias A and Alias B can never be valid, as there would always need to be one Alias Unlock referencing a greater index.
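The ordering rule amounts to a single pass over the unlocks. The `(unlock_type, reference_index)` pair model below is a simplification for illustration:

```python
ALIAS_UNLOCK = 2  # Unlock Type value from the table above

def unlock_order_valid(unlocks):
    """unlocks: list of (unlock_type, reference_index) pairs;
    reference_index is None for non-referencing unlock types."""
    for i, (utype, ref) in enumerate(unlocks):
        # An Alias Unlock at index i may only reference a smaller index k.
        if utype == ALIAS_UNLOCK and not (0 <= ref < i):
            return False
    return True
```

In the circular case, each Alias Unlock would have to reference the other, so at least one reference index is greater than or equal to its own position and validation fails.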

Alias Unlock Syntactic Validation

  • It must hold that 0 ≤ Alias Reference Unlock Index < Max Inputs Count.

Alias Unlock Semantic Validation

  • The address of the unlocking condition of the input being unlocked must be an Alias Address.
  • The index i of the Alias Unlock is the index of the input in the transaction that it unlocks. Alias Reference Unlock Index must be < i.
  • Alias Reference Unlock Index defines a previous input of the transaction and its unlock. This input must be an Alias Output with Alias ID that refers to the Alias Address being unlocked.
  • The referenced Alias Output must be unlocked for state transition.

NFT Locking & Unlocking

NFT ID field is functionally equivalent to Alias ID of an alias output. It is generated the same way, but it can only exist in NFT outputs. Following the same analogy as for alias addresses, NFT addresses are IOTA addresses that are controlled by whoever owns the NFT output itself.

Outputs that are locked under NFT Address can be unlocked by unlocking the NFT output in the same transaction that defines NFT Address, that is, the NFT output where NFT Address Type Byte || NFT ID = NFT Address.
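The concatenation above can be sketched directly. The function name is illustrative; the type byte 16 matches the NFT Address tables in this document:

```python
NFT_ADDRESS_TYPE = 16  # NFT Address type byte, per the address tables

def nft_address(nft_id: bytes) -> bytes:
    """NFT Address = NFT Address Type Byte || NFT ID."""
    assert len(nft_id) == 32
    return bytes([NFT_ADDRESS_TYPE]) + nft_id
```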

An NFT Unlock looks and behaves like an Alias Unlock, but the referenced input at the index must be an NFT output with the matching NFT ID.

NFT Unlock
Points to the unlock of a consumed NFT output.
Name Type Description
Unlock Type uint8 Set to value 3 to denote an NFT Unlock.
NFT Reference Unlock Index uint16 Index of input and unlock corresponding to an NFT output.

An NFT Unlock is only valid if the input in the transaction at index NFT Reference Unlock Index is the NFT output with the same NFT ID as the one derived from the Address field of the to-be unlocked output.

If the i-th Unlock of a transaction is an NFT Unlock and has NFT Reference Unlock Index set to k, it must hold that i > k. Hence, an NFT Unlock can only reference an Unlock at a smaller index.

NFT Unlock Syntactic Validation

  • It must hold that 0 ≤ NFT Reference Unlock Index < Max Inputs Count.

NFT Unlock Semantic Validation

  • The address of the input being unlocked must be an NFT Address.
  • The index i of the NFT Unlock is the index of the input in the transaction that it unlocks. NFT Reference Unlock Index must be < i.
  • NFT Reference Unlock Index defines a previous input of the transaction and its unlock. This input must be an NFT Output with NFT ID that refers to the NFT Address being unlocked.

Drawbacks

  • New output types increase transaction validation complexity, however it is still bounded.
  • Outputs take up more space in the ledger, UTXO database size might increase.
  • It is possible to intentionally deadlock aliases and NFTs, however client side software can notify users when they perform such an action. Deadlocked aliases and NFTs can not be unlocked, but this is true for any funds locked into unspendable addresses.
  • Time based output locking conditions can only be evaluated after attachment to the Tangle, during milestone confirmation.
  • The IOTA ledger can only support hard-coded scripts. Users can not write their own scripts because there is currently no way to charge them based on resource usage; all IOTA transactions are feeless by nature.
  • Aliases can be destroyed even if there are foundries alive that they control. Since only the controlling alias can unlock the foundry, such foundries and the supply of the tokens remain forever locked in the Tangle.
  • Token schemes and needed supply control rules are unclear.

Rationale and alternatives

The feeless nature of IOTA makes it inherently impossible to implement smart contracts on layer 1. A smart contract platform must not only be capable of executing smart contracts, but also of limiting their resource usage and making users pay validators for the resources used. IOTA has no concept of validators or fees. While it would technically be possible to run EUTXO smart contracts on the layer 1 Tangle, it is not possible to properly charge users for executing them.

The current design aims to combine the best of both worlds: a scalable and feeless layer 1 and Turing-complete smart contracts on layer 2. Layer 1 remains scalable because of parallel transaction validation and feeless because of the bounded, hard-coded script execution time, while layer 2 can offer support for all kinds of virtual machines, smart contracts and advanced tokenization use cases.

Unresolved questions

  • List of supported Token Schemes is not complete.
    • Deflationary token scheme
    • Inflationary token scheme with scheduled minting
    • etc.
  • Adapt the current congestion control, i.e. Block PoW, to better match the validation complexity of the different outputs and types.

Copyright and related rights waived via CC0.

tip: 19
title: Dust Protection Based on Byte Costs (Storage Deposit)
description: Prevent bloating the ledger size with dust outputs
author: Max Hase (@muXxer) 
discussions-to: https://github.com/iotaledger/tips/pull/39
status: Active
type: Standards
layer: Core
created: 2021-11-04
requires: TIP-18, TIP-20, TIP-21 and TIP-22
replaces: TIP-15

Summary

The current dust protection in chrysalis-pt2 is only an intermediate solution to prevent attacks or misbehavior that could bloat the ledger database. The design has several drawbacks, e.g., it does not scale, it relies on a total ordering of the Tangle, and it is rather complicated to use from a user's point of view.

This document describes a new dust protection concept, called storage deposit, which solves the mentioned drawbacks and creates a monetary incentive to keep the ledger state small. It focuses on the underlying problem, the increase in database size, instead of artificially limiting the number of UTXOs. This is achieved by enforcing a minimum IOTA coin deposit in every output based on the actually used disc space of the output itself.

Motivation

In a distributed ledger network, every participant, a so-called node, needs to keep track of the current ledger state. Since chrysalis-pt2, the IOTA ledger state is based on the UTXO model, where every node keeps track of all the currently unspent outputs. Without dust protection, even outputs containing only one single IOTA coin are valid and therefore stored in the database.

Misuse by honest users or intentionally bad behavior by malicious actors can lead to growing database and snapshot sizes and increasing computational costs (database lookups, balance calculations). Due to these increasing hardware requirements, the entry barrier to participate in the network becomes unaffordable and fewer nodes would operate the network.

Especially in a fee-less system like IOTA, this is a serious issue, since an attacker can create a lot of damage with low effort. Other DLTs do not yet face this problem, as such an attack would be much more expensive due to the high transaction fees. However, in order to solve scalability issues more and more transactions need to be handled. Therefore, other DLT projects will also eventually run into the same dust limitations. This document proposes to introduce storage deposit to address this.

Requirements

  • The maximum possible ledger database size must be limited to a reasonable and manageable size.
  • The dust protection must not depend on a global shared state of the ledger, so that transaction validation can happen in parallel.
  • The dust protection should work for outputs with arbitrary data and size.
  • The ledger database size should be fairly allocated to users based on the scarce resource, IOTA coins.

Detailed Design

The current dust protection solution in chrysalis-pt2 does not satisfy the mentioned requirements for the following reasons:

  • The enforced maximum limit of disc space is ~6.5 TB.
  • The dust allowance mechanism depends on the total amount of funds in DustAllowanceOutput per address, which is a global shared state.
  • It is designed for one fixed output size.

Therefore, a new transaction validation rule is introduced which replaces the former dust protection solution completely.

Blocks, including their payloads (even transaction payloads), are eventually pruned by the nodes, but unspent transaction outputs must be kept until they are spent. Therefore, the dust protection is based on the unspent outputs only.

Every output created by a transaction needs to have at least a minimum amount of IOTA coins deposited in the output itself, otherwise the output is syntactically invalid.

min_deposit_of_output = ⌊v_byte_cost · v_byte⌋
v_byte = ∑(weight𝑖 · byte_size𝑖) + offset

where:

  • v_byte_cost: costs in IOTA coins per virtual byte
  • weight𝑖: factor of field 𝑖 that takes computational and storage costs into account
  • byte_size𝑖: size of field 𝑖 in bytes
  • offset: additional v_bytes that are caused by additional data that has to be stored in the database but is not part of the output itself
:warning: min_deposit_of_output is rounded down
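A minimal sketch of the two formulas above. The weights, sizes and `v_byte_cost` used in the usage example are hypothetical, not protocol values:

```python
import math

def v_byte(fields, offset):
    """v_byte = sum(weight_i * byte_size_i) + offset;
    fields is an iterable of (weight, byte_size) pairs."""
    return sum(w * s for w, s in fields) + offset

def min_deposit_of_output(v_byte_cost, fields, offset):
    """min_deposit_of_output = floor(v_byte_cost * v_byte)."""
    return math.floor(v_byte_cost * v_byte(fields, offset))

# Hypothetical example: one 34-byte key field (weight 10.0), 40 bytes of
# data fields (weight 1.0), no offset, at a cost of 1.5 coins per v_byte.
deposit = min_deposit_of_output(1.5, [(10.0, 34), (1.0, 40)], 0)
```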

Starting with the tokenization and smart contracts mainnet upgrade, new output types are introduced by TIP-18 that contain mandatory and optional fields with variable length. Each of these fields results in different computational and storage costs, which are accounted for by the positive weight_i. The size of the field itself is expressed with byte_size_i. offset is used to take the overhead of the specific output itself into account.

The v_byte_cost is a protocol value, which has to be defined based on reasonable calculations and estimates.

In simple words, the more data you write to the global ledger database, the more IOTA you need to deposit in the output. This is not a fee, because the deposited coins can be reclaimed by consuming the output in a new transaction.

Advantages

The proposed solution has several advantages over the former solution.

First of all, the database size is limited to an absolute maximum size. Since the total supply of IOTA coins stays constant, the maximum amount of v_bytes that can ever be written to the database also remains constant.

Total ordering of the tangle is not necessary because there is no shared global ledger state for transaction validation anymore. The node can determine if the transaction is valid and the dust protection rules are fulfilled, just by looking at the transaction itself. Therefore this solution is also suitable for IOTA 2.0.

By introducing a certain cost for every byte stored in the ledger, it is possible to store arbitrary data in the outputs, as long as enough IOTA coins are deposited in the output itself to keep the information retained. This enables permanent storage of data in a distributed and decentralized way, without the need of a permanode.

Users have an economic incentive to clean up the database. By consuming old unused outputs, users can reclaim their deposited IOTA coins.

Drawbacks

This solution prevents seamless microtransactions, which are a unique selling point for IOTA, because the issuer of the transaction always needs to deposit min_deposit_of_output IOTA coins in the output created by the transaction. This minimum deposit will have a higher value than the microtransaction itself, which basically makes microtransactions impossible. Two different solutions to circumvent this obstacle are introduced here.

How does it affect other parts of the protocol?

The dust protection only affects "value-transactions". Since blocks containing other payloads are not stored in the ledger state and are subject to pruning, they cannot cause permanent "dust" and do not need to be considered for dust protection. However, all output types, e.g. smart contract requests, are affected and must comply with the min_deposit_of_output criteria. Therefore, these requests could get quite expensive for the user, but the same mechanism introduced for Microtransactions on Layer 1 can be utilized for smart contract requests as well.

Byte cost calculations

To limit the maximum database size, the total IOTA supply needs to be divided by the target database size in bytes to obtain the worst-case byte cost.

However, in this scenario no outputs hold more IOTA coins than required for the dust protection. This does not represent the real distribution of funds over the UTXOs. We could assume that these output amounts follow Zipf's law. Unfortunately, fitting a Zipf distribution to the current ledger state will not match the future distribution of the funds for several reasons:

  • There is already another dust protection in place, which distorts the distribution.
  • With new use cases enabled by the new dust protection (e.g. tokenization, storing arbitrary data in the ledger), the distribution will dramatically change.
  • Fittings for other DLT projects do not match because there are transaction fees in place, which decrease the amount of dust outputs in the distribution.

Another possibility would be to estimate what percentage of the database will be used for outputs with the minimum required deposit (fund sparsity percentage) in the future. The remaining IOTA coins can be ignored in that case to simplify the calculation. Since a fund sparsity percentage of less than 20% would already be bad for other upcoming protocol features like the mana calculation, we could take this value for our calculation instead of the worst case.
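As a rough illustration of the worst-case division described above, using the actual total IOTA supply but a purely hypothetical target database size:

```python
TOTAL_SUPPLY = 2_779_530_283_277_761   # total IOTA supply in base units
TARGET_DB_SIZE_VB = 1_000_000_000_000  # hypothetical target: 10^12 v_bytes

# Worst case: every coin sits in an output carrying only the minimum
# deposit, so the entire supply "pays" for the database.
worst_case_v_byte_cost = TOTAL_SUPPLY // TARGET_DB_SIZE_VB
```

With these placeholder numbers the worst-case cost is 2779 base units per v_byte; a fund sparsity assumption as discussed above would allow a correspondingly lower cost.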

Weights for different outputs

The different output types mentioned in the Output Types TIP-18 contain several mandatory and optional fields. Every field creates its own computational and storage requirements for the node, which is accounted for by assigning a different weight to every field.

Field types

The following table describes different field types in an output:

Name Description Weight Reasoning
key Creates a key lookup in the database. 10.0 Keys need to be stored in the LSM tree of the key-value database engine and need to be merged and leveled, which is computational-, memory- and read/write IO-wise a heavy task.
data Plain binary data on disk. 1.0 Data is stored as the value in the key-value database, and therefore only consumes disc space.
:warning: Protocol parameters are not set yet

Protocol parameters presented in this document are design parameters that will change in the future based on simulation results, benchmarking and security assumptions. The reader should not take these values as definitive.

One example of such a parameter is the weight assigned to the different output field types.

Outputs

The following tables show the different outputs including the possible fields and their specific weight.

Basic Output
Describes a basic output with optional features.
Offset
Field Field type Length Minimum Length Maximum Description
OutputID key 34 34 The ID of the output.
Block ID (included) data 32 32 The ID of the block in which the transaction payload that created this output was included.
Confirmation Milestone Index data 4 4 The index of the milestone which confirmed the transaction that created the output.
Confirmation Unix Timestamp data 4 4 The unix timestamp of the milestone which confirmed the transaction that created the output.
Fields
Name Field type Length Minimum Length Maximum Description
Output Type data 1 1 Set to value 3 to denote a Basic Output.
Amount data 8 8 The amount of IOTA coins held by the output.
Native Tokens Count data 1 1 The number of native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Field type Length Minimum Length Maximum Description
Token ID data 38 38 Identifier of the native token.
Amount data 32 32 Amount of native tokens of the given Token ID.
Unlock Conditions Count data 1 1 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
Address Unlock Condition
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 0 to denote an Address Unlock Condition.
Address
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Storage Deposit Return Unlock Condition
Defines the amount of IOTAs used as storage deposit that have to be returned to Return Address.
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 1 to denote a Storage Deposit Return Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Return Amount data 8 8 Amount of IOTA coins the consuming transaction should deposit to the address defined in Return Address.
Timelock Unlock Condition
Defines a unix timestamp until which the output can not be unlocked.
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 2 to denote a Timelock Unlock Condition.
Unix Time data 4 4 Unix time (seconds since Unix epoch) starting from which the output can be consumed.
Expiration Unlock Condition
Defines a unix time until which only Address, defined in Address Unlock Condition, is allowed to unlock the output. After the unix time is reached/passed, only Return Address can unlock it.
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 3 to denote an Expiration Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Unix Time data 4 4 Before this unix time, Address Unlock Condition is allowed to unlock the output, after that only the address defined in Return Address.
Features Count data 1 1 The number of features following.
Features atMostOneOfEach
Sender Feature
Identifies the validated sender of the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 0 to denote a Sender Feature.
Sender oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 2 to denote a Metadata Feature.
Data Length data 2 2 Length of the following data field in bytes.
Data data 1 8192 Binary data.
Tag Feature
Defines an indexation tag to which the output can be indexed by additional node plugins.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 3 to denote a Tag Feature.
Tag Length data 1 1 Length of the following tag field in bytes.
Tag data 1 255 Binary indexation data.
v_byte Minimum 426
v_byte Maximum 13477
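The table's v_byte Minimum of 426 can be reproduced from the weights and lengths above: the offset contributes 380 v_bytes, and the smallest possible Basic Output (a single Address Unlock Condition with an Ed25519 Address, no native tokens, no features) contributes 46:

```python
KEY_WEIGHT, DATA_WEIGHT = 10.0, 1.0

# Offset: OutputID (key, 34 bytes) + Block ID (32) + Confirmation
# Milestone Index (4) + Confirmation Unix Timestamp (4).
offset = KEY_WEIGHT * 34 + DATA_WEIGHT * (32 + 4 + 4)  # 380.0

# Smallest Basic Output: Output Type (1) + Amount (8) + Native Tokens
# Count (1) + Unlock Conditions Count (1) + Address Unlock Condition
# (type 1 + address type 1 + PubKeyHash 32) + Features Count (1).
fields_min = DATA_WEIGHT * (1 + 8 + 1 + 1 + 1 + 1 + 32 + 1)  # 46.0

v_byte_min = offset + fields_min  # 426.0, matching the table's minimum
```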

Alias Output
Describes an alias account in the ledger that can be controlled by the state and governance controllers.
Offset
Field Field type Length Minimum Length Maximum Description
OutputID key 34 34 The ID of the output.
Block ID (included) data 32 32 The ID of the block in which the transaction payload that created this output was included.
Confirmation Milestone Index data 4 4 The index of the milestone which confirmed the transaction that created the output.
Confirmation Unix Timestamp data 4 4 The unix timestamp of the milestone which confirmed the transaction that created the output.
Fields
Name Field type Length Minimum Length Maximum Description
Output Type data 1 1 Set to value 4 to denote an Alias Output.
Amount data 8 8 The amount of IOTA coins held by the output.
Native Tokens Count data 1 1 The number of native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Field type Length Minimum Length Maximum Description
Token ID data 38 38 Identifier of the native token.
Amount data 32 32 Amount of native tokens of the given Token ID.
Alias ID data 32 32 Unique identifier of the alias, which is the BLAKE2b-256 hash of the Output ID that created it. Alias Address = Alias Address Type || Alias ID
State Index data 4 4 A counter that must increase by 1 every time the alias is state transitioned.
State Metadata Length data 2 2 Length of the following State Metadata field.
State Metadata data 0 8192 Metadata that can only be changed by the state controller.
Foundry Counter data 4 4 A counter that denotes the number of foundries created by this alias account.
Unlock Conditions Count data 1 1 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
State Controller Address Unlock Condition
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 4 to denote a State Controller Address Unlock Condition.
Address
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Governor Address Unlock Condition
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 5 to denote a Governor Address Unlock Condition.
Address
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Features Count data 1 1 The number of features following.
Features atMostOneOfEach
Sender Feature
Identifies the validated sender of the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 0 to denote a Sender Feature.
Sender oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 2 to denote a Metadata Feature.
Data Length data 2 2 Length of the following data field in bytes.
Data data 1 8192 Binary data.
Immutable Features Count data 1 1 The number of immutable features following. Immutable features are defined upon deployment of the UTXO state machine and are not allowed to change in any future state transition.
Immutable Features atMostOneOfEach
Issuer Feature
Identifies the validated issuer of the UTXO state machine.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 1 to denote an Issuer Feature.
Issuer oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 2 to denote a Metadata Feature.
Data Length data 2 2 Length of the following data field in bytes.
Data data 1 8192 Binary data.
v_byte Minimum 469
v_byte Maximum 29633

Foundry Output
Describes a foundry output that is controlled by an alias.
Offset
Field Field type Length Minimum Length Maximum Description
OutputID key 34 34 The ID of the output.
Block ID (included) data 32 32 The ID of the block in which the transaction payload that created this output was included.
Confirmation Milestone Index data 4 4 The index of the milestone which confirmed the transaction that created the output.
Confirmation Unix Timestamp data 4 4 The unix timestamp of the milestone which confirmed the transaction that created the output.
Fields
Name Field type Length Minimum Length Maximum Description
Output Type data 1 1 Set to value 5 to denote a Foundry Output.
Amount data 8 8 The amount of IOTA coins held by the output.
Native Tokens Count data 1 1 The number of different native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Field type Length Minimum Length Maximum Description
Token ID data 38 38 Identifier of the native token.
Amount data 32 32 Amount of native tokens of the given Token ID.
Serial Number data 4 4 The serial number of the foundry with respect to the controlling alias.
Token Scheme oneOf
Simple Token Scheme
Name Field type Length Minimum Length Maximum Description
Token Scheme Type data 1 1 Set to value 0 to denote a Simple Token Scheme.
Minted Tokens data 32 32 Amount of tokens minted by this foundry.
Melted Tokens data 32 32 Amount of tokens melted by this foundry.
Maximum Supply data 32 32 Maximum supply of tokens controlled by this foundry.
Unlock Conditions Count data 1 1 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
Immutable Alias Address Unlock Condition
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 6 to denote an Immutable Alias Address Unlock Condition.
Address
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
Features Count data 1 1 The number of features following.
Features atMostOneOfEach
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 2 to denote a Metadata Feature.
Data Length data 2 2 Length of the following data field in bytes.
Data data 1 8192 Binary data.
Immutable Features Count data 1 1 The number of immutable features following. Immutable features are defined upon deployment of the UTXO state machine and are not allowed to change in any future state transition.
Immutable Features atMostOneOfEach
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 2 to denote a Metadata Feature.
Data Length data 2 2 Length of the following data field in bytes.
Data data 1 8192 Binary data.
v_byte Minimum 528
v_byte Maximum 21398

NFT Output
Describes an NFT output, a globally unique token with metadata attached.
Offset
Field Field type Length Minimum Length Maximum Description
OutputID key 34 34 The ID of the output.
Block ID (included) data 32 32 The ID of the block in which the transaction payload that created this output was included.
Confirmation Milestone Index data 4 4 The index of the milestone which confirmed the transaction that created the output.
Confirmation Unix Timestamp data 4 4 The unix timestamp of the milestone which confirmed the transaction that created the output.
Fields
Name Field type Length Minimum Length Maximum Description
Output Type data 1 1 Set to value 6 to denote an NFT Output.
Amount data 8 8 The amount of IOTA coins held by the output.
Native Tokens Count data 1 1 The number of native tokens held by the output.
Native Tokens optAnyOf
Native Token
Name Field type Length Minimum Length Maximum Description
Token ID data 38 38 Identifier of the native token.
Amount data 32 32 Amount of native tokens of the given Token ID.
NFT ID data 32 32 Unique identifier of the NFT, which is the BLAKE2b-256 hash of the Output ID that created it. NFT Address = NFT Address Type || NFT ID
Unlock Conditions Count data 1 1 The number of unlock conditions following.
Unlock Conditions atMostOneOfEach
Address Unlock Condition
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 0 to denote an Address Unlock Condition.
Address
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Storage Deposit Return Unlock Condition
Defines the amount of IOTA coins used as storage deposit that has to be returned to Return Address.
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 1 to denote a Storage Deposit Return Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Return Amount data 8 8 Amount of IOTA coins the consuming transaction should deposit to the address defined in Return Address.
Timelock Unlock Condition
Defines a unix timestamp until which the output cannot be unlocked.
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 2 to denote a Timelock Unlock Condition.
Unix Time data 4 4 Unix time (seconds since Unix epoch) starting from which the output can be consumed.
Expiration Unlock Condition
Defines a unix time until which only Address, defined in Address Unlock Condition, is allowed to unlock the output. After the unix time is reached/passed, only Return Address can unlock it.
Name Field type Length Minimum Length Maximum Description
Unlock Condition Type data 1 1 Set to value 3 to denote an Expiration Unlock Condition.
Return Address oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Unix Time data 4 4 Before this unix time, Address Unlock Condition is allowed to unlock the output; after that, only the address defined in Return Address can.
Features Count data 1 1 The number of features following.
Features atMostOneOfEach
Sender Feature
Identifies the validated sender of the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 0 to denote a Sender Feature.
Sender oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 2 to denote a Metadata Feature.
Data Length data 2 2 Length of the following data field in bytes.
Data data 1 8192 Binary data.
Tag Feature
Defines an indexation tag to which the output can be indexed by additional node plugins.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 3 to denote a Tag Feature.
Tag Length data 1 1 Length of the following tag field in bytes.
Tag data 1 255 Binary indexation data.
Immutable Features Count data 1 1 The number of immutable features following. Immutable features are defined upon deployment of the UTXO state machine and are not allowed to change in any future state transition.
Immutable Features atMostOneOfEach
Issuer Feature
Identifies the validated issuer of the UTXO state machine.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 1 to denote an Issuer Feature.
Issuer oneOf
Ed25519 Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 0 to denote an Ed25519 Address.
PubKeyHash data 32 32 The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key.
Alias Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 8 to denote an Alias Address.
Alias ID data 32 32 The raw bytes of the Alias ID which is the BLAKE2b-256 hash of the outputID that created it.
NFT Address
Name Field type Length Minimum Length Maximum Description
Address Type data 1 1 Set to value 16 to denote an NFT Address.
NFT ID data 32 32 The raw bytes of the NFT ID which is the BLAKE2b-256 hash of the outputID that created it.
Metadata Feature
Defines metadata (arbitrary binary data) that will be stored in the output.
Name Field type Length Minimum Length Maximum Description
Feature Type data 1 1 Set to value 2 to denote a Metadata Feature.
Data Length data 2 2 Length of the following data field in bytes.
Data data 1 8192 Binary data.
v_byte Minimum 459
v_byte Maximum 21739

Microtransactions

Microtransactions on Layer 1

To enable microtransactions on Layer 1 and still satisfy the min_deposit_of_output requirement, a new mechanism called conditional sending is introduced with the new Output Types TIP-18.

Microtransactions on Layer 1

The preceding picture shows the conditional sending mechanism. Alice uses a Basic Output to send a microtransaction of 1 IOTA to Bob's address. To fulfill the min_deposit_of_output requirement, the Amount is increased by min_deposit_of_output IOTA, which is 1 MIOTA in the above example. To prevent Bob from accessing these additional funds, called the storage deposit, Alice adds the optional Storage Deposit Return Unlock Condition to the Basic Output. Bob can now only consume the newly created output if the unlocking transaction deposits the specified Return Amount, in this case 1 MIOTA, to the Return Address defined by Alice. By consuming another UTXO and adding its amount to the received 1 IOTA, Bob ensures that he creates a valid output according to the dust protection rules.

To prevent Bob from blocking access to the storage deposit forever, Alice specifies the additional Expiration Unlock Condition in the Basic Output. If Bob does not consume the output before the time window defined by Alice expires, Alice regains total control over the output.

This means that there is no risk for Alice to lose the storage deposit, because either Bob needs to return the specified Return Amount, or the ownership of the created output switches back to Alice after the specified time-window has expired.
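The return condition described above can be sketched as a small check. This is an illustrative simplification, not a reference implementation: the types and names below are made up for the example, and TIP-18 additionally constrains the shape of the returning output.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Output:
    address: str   # receiving address (illustrative representation)
    amount: int    # IOTA coins

@dataclass(frozen=True)
class StorageDepositReturn:
    return_address: str
    return_amount: int

def satisfies_storage_deposit_return(cond, outputs):
    """The consuming transaction is only valid if it deposits at least
    Return Amount IOTA coins to Return Address (simplified check)."""
    returned = sum(o.amount for o in outputs if o.address == cond.return_address)
    return returned >= cond.return_amount

# Alice's condition: 1 MIOTA storage deposit must come back to her.
cond = StorageDepositReturn(return_address="alice", return_amount=1_000_000)
# Bob's spending transaction returns the deposit and keeps the 1 IOTA.
outputs = [Output("alice", 1_000_000), Output("bob", 1)]
print(satisfies_storage_deposit_return(cond, outputs))  # True
```

If Bob's transaction omitted the output to Alice, the check would fail and the unlock would be semantically invalid.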

This mechanism can also be used to transfer native tokens or on-chain requests to ISCP chains without losing control over the required storage deposit.

Microtransactions on Layer 2

Another solution is to outsource microtransactions to Layer 2 applications like smart contracts. In Layer 2 there are no restrictions regarding the minimum balance of an output.

Microtransactions on Layer 2

In this example, Alice sends funds to a smart contract chain on Layer 1 with an output that covers at least min_deposit_of_output. From this point on, Alice can send any number of off-chain requests to the smart contract chain, causing the smart contract to send microtransactions from Alice's on-chain account to Bob's on-chain account. Bob can then request his on-chain account balance to be withdrawn to his Layer 1 address. This last step can also be combined with the conditional sending mechanism introduced above, in case Bob wants to withdraw less than min_deposit_of_output IOTA coins or native assets.

:information_source: Potential additional mechanisms for microtransactions are currently being discussed.

Migration from old to new dust protection

All SigLockedSingleOutputs below 1 MIOTA and all SigLockedDustAllowanceOutputs of an address could be collected and migrated into a single new BasicOutput, using the smallest Output ID (byte-wise) of all the collected outputs as the new identifier.

This could probably be done in the form of a global snapshot and would represent a hard-fork.

Another solution is to convert all SigLockedDustAllowanceOutputs into BasicOutputs and leave SigLockedSingleOutputs below 1 MIOTA untouched.

Copyright and related rights waived via CC0.

tip: 20
title: Transaction Payload with TIP-18 Output Types
description: Add output types, unlocks, and output features from TIP-18 into Transaction Payload
author: Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/pull/40
status: Active
type: Standards
layer: Core
created: 2021-11-18
requires: 18
replaces: 7

Summary

This TIP proposes a UTXO-based transaction structure consisting of all the inputs and outputs of a transfer. Specifically, this TIP defines a transaction payload for blocks described in TIP-24 and extends the transaction payload described in TIP-7.

Motivation

TIP-7 describes the introduction of the UTXO ledger model for Chrysalis. This TIP extends the transaction model of the UTXO ledger to:

  • accommodate the new output types introduced in TIP-18,
  • include a Network ID field in the transaction for replay protection,
  • introduce Inputs Commitment field to prevent client eclipse attacks that would result in loss of funds,
  • relax syntactic validation rules such that the inputs and outputs of a transaction no longer need to be lexicographically ordered; furthermore, outputs do not have to be unique.

The motivation for such changes is to provide a more flexible and secure framework for wallets and layer 2 applications. Chrysalis focused solely on using the ledger as a payment application, while Stardust transforms the ledger into a settlement layer for interconnected layer 2 blockchains and applications.

Detailed design

UTXO

The unspent transaction output (UTXO) model defines a ledger state where balances are not directly associated to addresses but to the outputs of transactions. In this model, transactions reference outputs of previous transactions as inputs, which are consumed (removed) to create new outputs. A transaction must consume all the funds of the referenced inputs.

Using a UTXO-based model provides several benefits:

  • Parallel validation of transactions.
  • Easier double-spend detection, since conflicting transactions would reference the same UTXO.
  • Replay protection, which is important when addresses are reusable. Replaying the same transaction would be detected as already applied and would thus have no impact.
  • Balances are no longer strictly associated to addresses. This allows a higher level of abstraction and thus enables other types of outputs with particular unlock criteria.

Within a transaction using UTXOs, inputs and outputs make up the to-be-signed data of the transaction. The section unlocking the inputs is called the unlock. An unlock may contain a signature proving ownership of a given input's address and/or other unlock criteria.

The following image depicts the flow of funds using UTXO:

UTXO flow

Structure

Serialized layout

A Transaction Payload is made up of two parts:

  1. The Transaction Essence part which contains the inputs, outputs and an optional embedded payload.
  2. The Unlocks which unlock the inputs of the Transaction Essence.

The serialized form of the transaction is deterministic, meaning the same logical transaction always results in the same serialized byte sequence. However, in contrast to Chrysalis Phase 2 (TIP-7), the inputs and outputs are treated as lists: they may contain duplicates, their serialization order matches the order of the list, and they do not need to be sorted.

The Transaction Payload ID is the BLAKE2b-256 hash of the entire serialized payload data including unlocks.
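The payload ID computation is a single hash over the serialized bytes. A minimal sketch using Python's standard library (the example bytes are illustrative only, not a valid payload):

```python
import hashlib

def transaction_payload_id(serialized_payload: bytes) -> bytes:
    """BLAKE2b-256 hash of the entire serialized payload, unlocks included."""
    return hashlib.blake2b(serialized_payload, digest_size=32).digest()

# Illustrative bytes only: uint32 payload type 6 (little-endian) followed
# by the rest of the serialized essence and unlocks.
serialized = b"\x06\x00\x00\x00" + b"<essence and unlocks>"
print(transaction_payload_id(serialized).hex())
```

Because the serialization is deterministic, two clients serializing the same logical transaction obtain the same 32-byte payload ID.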

The following table describes the entirety of a Transaction Payload in its serialized form following the notation from TIP-21:

Name Type Description
Payload Type uint32 Set to value 6 to denote a TIP-20 Transaction Payload.
Essence oneOf
Transaction Essence
Describes the essence data making up a transaction by defining its inputs, outputs and an optional payload.
Name Type Description
Transaction Type uint8 Set to value 1 to denote a TIP-20 Transaction Essence.
Network ID uint64 The unique value denoting whether the block was meant for mainnet, shimmer, testnet, or a private network. It consists of the first 8 bytes of the BLAKE2b-256 hash of the network name.
Inputs Count uint16 The number of input entries.
Inputs anyOf
UTXO Input
Describes an input which references an unspent transaction output to consume.
Name Type Description
Input Type uint8 Set to value 0 to denote a TIP-20 UTXO Input.
Transaction ID ByteArray[32] The BLAKE2b-256 hash of the transaction payload containing the referenced output.
Transaction Output Index uint16 The output index of the referenced output.
Inputs Commitment ByteArray[32] BLAKE2b-256 hash serving as a commitment to the serialized outputs referenced by Inputs.
Outputs Count uint16 The number of output entries.
Outputs anyOf
Basic Output
Describes a deposit to a single address. The output might contain optional features and native tokens.
Alias Output
Describes an alias account in the ledger.
Foundry Output
Describes a foundry that controls supply of native tokens.
NFT Output
Describes a unique, non-fungible token deposit to a single address.
Payload Length uint32 The length in bytes of the optional payload.
Payload optOneOf
Tagged Data Payload
Describes data with optional tag, defined in TIP-23.
Unlocks Count uint16 The number of unlock entries. It must match the field Inputs Count.
Unlocks anyOf
Signature Unlock
Defines an unlock containing a signature.
Reference Unlock
References a previous unlock, where the same unlock can be used for multiple inputs.
Alias Unlock
References a previous unlock of a consumed alias output.
NFT Unlock
References a previous unlock of a consumed NFT output.

Transaction Essence

The Transaction Essence of a Transaction Payload carries the inputs, outputs, and an optional payload. The Transaction Essence is an explicit type and therefore starts with its own Transaction Essence Type byte which is of value 1 for TIP-20 Transaction Essence.

Network ID

The Network ID field of the transaction essence serves as a replay protection mechanism. It is a unique value denoting whether the transaction was meant for the IOTA mainnet, shimmer, testnet-1, or a private network. It consists of the first 8 bytes of the BLAKE2b-256 hash of the Network Name protocol parameter, interpreted as an unsigned integer number.

| Network Name | Resulting Network ID | Network Name defined in |
| --- | --- | --- |
| iota-mainnet | 9374574019616453254 | TIP-22 |
| shimmer | 14364762045254553490 | TIP-32 |
| testnet-1 | 1856588631910923207 | - |
| example-mynetwork | 1967754805504104511 | - |
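The derivation of a Network ID can be sketched with the standard library. The little-endian interpretation of the first 8 hash bytes is assumed here, matching the serialization conventions of TIP-21:

```python
import hashlib

def network_id(network_name: str) -> int:
    digest = hashlib.blake2b(network_name.encode(), digest_size=32).digest()
    # First 8 bytes of the BLAKE2b-256 hash of the network name, read as an
    # unsigned integer; little-endian byte order is assumed per TIP-21.
    return int.from_bytes(digest[:8], "little")

for name in ("iota-mainnet", "shimmer", "testnet-1", "example-mynetwork"):
    print(name, network_id(name))
```

The printed values can be checked against the table above.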

Inputs

The Inputs field holds the inputs to consume in order to fund the outputs of the Transaction Payload. Currently, there is only one type of input, the UTXO Input. In the future, more types of inputs may be specified as part of protocol upgrades.

Each input must be accompanied by a corresponding Unlock at the same index in the Unlocks part of the Transaction Payload.

UTXO Input

A UTXO Input is an input which references an unspent output of a previous transaction. This UTXO is uniquely identified by its Output ID, defined by the Transaction ID of the creating transaction together with corresponding output index. Each UTXO Input must be accompanied by an Unlock that is allowed to unlock the referenced output.
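Assembling an Output ID from its two components is straightforward; a sketch, assuming the uint16 index is encoded little-endian per TIP-21:

```python
import struct

def output_id(transaction_id: bytes, output_index: int) -> bytes:
    """Output ID = Transaction ID (32 bytes) || output index (uint16).
    Little-endian index encoding is assumed, per TIP-21."""
    if len(transaction_id) != 32:
        raise ValueError("Transaction ID must be 32 bytes")
    return transaction_id + struct.pack("<H", output_index)

oid = output_id(bytes(32), 2)
print(len(oid))  # 34 bytes, matching the OutputID key length in the tables above
```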

Inputs Commitment

The Inputs Commitment field of the Transaction Essence is a cryptographic commitment to the content of the consumed outputs (inputs). It consists of the BLAKE2b-256 hash of the concatenated output hashes.

In the Inputs field, the consumed outputs are only referenced by their Output ID. While an Output ID technically depends on the content of the referenced output, a client has no way of validating this without access to the original transaction. For the Inputs Commitment, the client has to be aware of the outputs' content in order to produce a semantically valid transaction. This protects clients against eclipse attacks that would otherwise result in loss of funds.
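The commitment is the hash of the concatenated hashes of the consumed outputs, as also spelled out in the semantic validation rules. A minimal sketch:

```python
import hashlib

def blake2b_256(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

def inputs_commitment(serialized_outputs):
    """BLAKE2b-256( BLAKE2b-256(O1) || ... || BLAKE2b-256(On) ), where
    O1..On are the serialized consumed outputs, in the order in which
    they are referenced by the Inputs field."""
    return blake2b_256(b"".join(blake2b_256(o) for o in serialized_outputs))

commitment = inputs_commitment([b"serialized-output-1", b"serialized-output-2"])
print(len(commitment))  # 32
```

Note that the commitment is order-sensitive: swapping two inputs changes the resulting hash.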

Outputs

The Outputs field holds the outputs that are created by the Transaction Payload. There are different output types, but they must all have an Amount field denoting the number of IOTA coins to deposit.

The following table lists all the output types that are currently supported as well as links to the corresponding specification. The SigLockedSingleOutput as well as the SigLockedDustAllowanceOutput introduced in Chrysalis Phase 2 TIP-7 have been removed and are no longer supported.

| Output Name | Type Value | TIP |
| --- | --- | --- |
| Basic | 3 | TIP-18 |
| Alias | 4 | TIP-18 |
| Foundry | 5 | TIP-18 |
| NFT | 6 | TIP-18 |

Payload

The Transaction Essence itself can contain another payload as described in general in TIP-24. The semantic validity of the encapsulating Transaction Payload does not have any impact on the payload.

The following table lists all the payload types that can be nested inside a Transaction Essence as well as links to the corresponding specification:

| Name | Type Value | TIP |
| --- | --- | --- |
| Tagged Data | 5 | TIP-23 |

Unlocks

The Unlocks field holds the unlocks unlocking inputs within a Transaction Essence.

The following table lists all the unlock types that are currently supported as well as links to the corresponding specifications. The Signature Unlock and the Reference Unlock are specified as part of this TIP.

| Unlock Name | Type Value | TIP |
| --- | --- | --- |
| Signature | 0 | TIP-20 |
| Reference | 1 | TIP-20 |
| Alias | 2 | TIP-18 |
| NFT | 3 | TIP-18 |

Signature Unlock

The Signature Unlock defines an Unlock which holds a signature signing the BLAKE2b-256 hash of the Transaction Essence (including the optional payload). It is serialized as follows:

Name Type Description
Unlock Type uint8 Set to value 0 to denote a Signature Unlock.
Signature oneOf
Ed25519 Signature
Name Type Description
Signature Type uint8 Set to value 0 to denote an Ed25519 Signature.
Public key ByteArray[32] The Ed25519 public key of the signature.
Signature ByteArray[64] The Ed25519 signature signing the BLAKE2b-256 hash of the serialized Transaction Essence.
Unlock syntactic validation
  • Signature must contain an Ed25519 Signature.
  • The Signature Unlock must be unique, i.e. there must not be any other Signature Unlocks in the Unlocks field of the transaction payload with the same signature.

Reference Unlock

The Reference Unlock defines an Unlock which references a previous Unlock (which must not be another Reference Unlock). It must be used if multiple inputs can be unlocked via the same Unlock. It is serialized as follows:

Name Type Description
Unlock Type uint8 Set to value 1 to denote a Reference Unlock.
Reference uint16 Represents the index of a previous unlock.
Unlock syntactic validation
  • The Reference Unlock at index i must have Reference < i and the unlock at index Reference must be a Signature Unlock.

Example: Consider a Transaction Essence containing the UTXO Inputs 0, 1 and 2, where 0 and 2 are both spending outputs belonging to the same Ed25519 address A and 1 is spending from a different address B. This results in the following structure of the Unlocks part:

| Index | Unlock |
| --- | --- |
| 0 | A Signature Unlock holding the Ed25519 signature for address A. |
| 1 | A Signature Unlock holding the Ed25519 signature for address B. |
| 2 | A Reference Unlock which references 0, as both require the same signature for A. |
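The reference-unlock rule and the example above can be checked with a short sketch. The tuple representation of unlocks is illustrative, not taken from a real library:

```python
def validate_unlocks(unlocks):
    """unlocks: list of ("sig", signature) or ("ref", index) tuples.
    A Reference Unlock at index i must satisfy Reference < i, and the
    referenced unlock must be a Signature Unlock."""
    for i, unlock in enumerate(unlocks):
        if unlock[0] == "ref":
            ref = unlock[1]
            if not (ref < i and unlocks[ref][0] == "sig"):
                return False
    return True

# The example: inputs 0 and 2 spend from address A, input 1 from address B.
unlocks = [("sig", "signature-for-A"), ("sig", "signature-for-B"), ("ref", 0)]
print(validate_unlocks(unlocks))                     # True
print(validate_unlocks([("ref", 1), ("sig", "s")]))  # False: forward reference
```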

Validation

A Transaction Payload has different validation stages, since some validation steps can only be executed when certain information has (or has not) been received. We therefore distinguish between syntactic and semantic validation.

The different output types and optional output features introduced by TIP-18 add additional constraints to the transaction validation rules, but since these are specific to the given outputs and features, they are discussed for each output type and feature type separately.

Syntactic validation

Syntactic validation is checked as soon as the transaction has been received. It validates the structure but not the signatures of the transaction. If the transaction does not pass this stage, it must not be broadcast further and can be discarded right away.

The following criteria define whether a payload passes syntactic validation:

  • Essence:
    • Transaction Type value must denote a TIP-20 Transaction Essence.
    • Network ID must match the value of the current network.
    • Inputs:
      • Inputs Count must be 0 < x ≤ Max Inputs Count.
      • For each input the following must be true:
        • Input Type must denote a UTXO Input.
        • Transaction Output Index must be 0 ≤ x < Max Outputs Count.
      • Each pair of Transaction ID and Transaction Output Index must be unique in the list of inputs.
    • Outputs:
      • Outputs Count must be 0 < x ≤ Max Outputs Count.
      • For each output the following must be true:
        • Output Type must match one of the values described under Outputs.
        • The output itself must pass syntactic validation.
      • The sum of all Amount fields must not exceed Max IOTA Supply.
      • The count of all distinct native tokens present in outputs must not be larger than Max Native Token Count.
    • Payload (if present):
      • Payload Type must match one of the values described under Payload.
      • Payload fields must be correctly parsable in the context of the Payload Type.
      • The payload itself must pass syntactic validation.
  • Unlocks:
    • Unlocks Count must match Inputs Count of the Transaction Essence.
    • For each unlock the following must be true:
      • Each Unlock Type must match one of the values described under Unlocks.
      • The unlock itself must pass syntactic validation.
  • Given the type and length information, the Transaction Payload must consume the entire byte array of the Payload field of the encapsulating object.
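A partial sketch of the input-related checks above (the real rules also cover outputs, payloads, and unlocks; the protocol parameter values and input representation are assumed for illustration):

```python
MAX_INPUTS_COUNT = 128    # protocol parameter, value assumed for illustration
MAX_OUTPUTS_COUNT = 128   # likewise assumed

def validate_inputs(inputs):
    """inputs: list of (transaction_id: bytes, output_index: int) pairs."""
    # Inputs Count must be 0 < x <= Max Inputs Count.
    if not (0 < len(inputs) <= MAX_INPUTS_COUNT):
        return False
    # Each (Transaction ID, Transaction Output Index) pair must be unique.
    if len(set(inputs)) != len(inputs):
        return False
    # Each output index must be 0 <= x < Max Outputs Count.
    return all(0 <= idx < MAX_OUTPUTS_COUNT for _, idx in inputs)

print(validate_inputs([(b"\x00" * 32, 0), (b"\x00" * 32, 1)]))  # True
print(validate_inputs([(b"\x00" * 32, 0), (b"\x00" * 32, 0)]))  # False: duplicate
```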

Semantic validation

The Semantic validation of a Transaction Payload is performed when its encapsulating block is confirmed by a milestone. The semantic validity of transactions depends on the order in which they are processed. Thus, it is necessary that all the nodes in the network perform the checks in the same order, no matter the order in which the transactions are received. This is assured by using the White-Flag ordering as described in TIP-2.

Processing transactions according to the White-Flag ordering enables users to spend UTXOs which are created in the same milestone confirmation cone, as long as the spending transaction comes after the funding transaction in the aforementioned White-Flag order. In this case, it is recommended that users include the Block ID of the funding transaction as a parent of the block containing the spending transaction.

The following criteria define whether a payload passes semantic validation:

  • Each input must reference a valid UTXO, i.e. the output referenced by the input's Transaction ID and Transaction Output Index is known (booked) and unspent.
  • Inputs Commitment must equal BLAKE2( BLAKE2(O1) || … || BLAKE2(On) ), where O1, ..., On are the complete serialized outputs referenced by the Inputs field in that order.
  • The transaction must spend the entire coin balance, i.e. the sum of the Amount fields of all the UTXOs referenced by inputs must match the sum of the Amount fields of all outputs.
  • The count of all distinct native tokens present in the UTXOs referenced by inputs and in the transaction outputs must not be larger than Max Native Token Count. A native token that occurs several times in both inputs and outputs is counted as one.
  • The transaction is balanced in terms of native tokens when the amount of native tokens present in the UTXOs referenced by inputs equals that of the outputs. When the transaction is imbalanced, it must hold true that when there is a surplus of native tokens on the:
    • output side of the transaction: the foundry outputs controlling outstanding native token balances must be present in the transaction. The validation of the foundry output(s) determines if the minting operations are valid.
    • input side of the transaction: the transaction destroys tokens. The presence and validation of the foundry outputs of the native tokens determines whether the tokens are burned (removed from the ledger) or melted within the foundry. When the foundry output is not present in the transaction, outstanding token balances must be burned.
  • Each output and all its output features must pass semantic validation in the context of the following input:
    1. The Transaction Payload,
    2. the list of UTXOs referenced by inputs and
    3. the Unix timestamp of the confirming milestone.
  • Each unlock must be valid with respect to the UTXO referenced by the input of the same index:
    • If it is a Signature Unlock:
      • The Signature Type must match the Address Type of the UTXO,
      • the BLAKE2b-256 hash of Public Key must match the Address of the UTXO and
      • the Signature field must contain a valid signature for Public Key.
    • If it is a Reference Unlock, the referenced Signature Unlock must be valid with respect to the UTXO.
    • If it is an Alias Unlock:
      • The address unlocking the UTXO must be an Alias Address.
      • The referenced Unlock unlocks the alias defined by the unlocking address of the UTXO.
    • If it is an NFT Unlock:
      • The address unlocking the UTXO must be an NFT Address.
      • The referenced Unlock unlocks the NFT defined by the unlocking address of the UTXO.

If a Transaction Payload passes the semantic validation, its referenced UTXOs must be marked as spent and its new outputs must be created/booked in the ledger. The Block ID of the block encapsulating the processed payload then also becomes part of the input for the White-Flag Merkle tree hash of the confirming milestone (TIP-4).

Transactions that do not pass semantic validation are ignored. Their UTXOs are not marked as spent and their outputs are not booked in the ledger.

Miscellaneous

Transaction timestamps

Since transaction timestamps – whether they are signed or not – do not provide any guarantee of correctness, they have been left out of the Transaction Payload. Instead, the global timestamp of the confirming milestone (TIP-8) is used.

Address reuse

While, in contrast to Winternitz one-time signatures (W-OTS), producing multiple Ed25519 signatures for the same private key and address does not decrease its security, it still drastically reduces the privacy of users. It is thus considered best practice that applications and services create a new address per deposit to circumvent these privacy issues.

In essence, Ed25519 support allows for smaller transactions and makes it safe to spend funds which were sent to an already used deposit address. Ed25519 addresses are not meant to be used like email addresses. See this Bitcoin wiki article for further information.

Drawbacks

  • The new transaction format is the core data type within the IOTA ecosystem. Changing it means that all projects need to accommodate it, including wallets, web services, client libraries and applications using IOTA in general. It is not possible to keep these changes backwards compatible, meaning that all nodes must upgrade to further participate in the network.
  • It is not possible to produce a valid transaction without having access to the content of the consumed outputs.

Rationale and alternatives

  • Inputs Commitment and Network ID are both explicit fields of the transaction, while they could be made configuration parameters for the signature generating process. In this scenario the signature would be invalid if the parameters on client and network side mismatch. While this would reduce the size of a transaction, it would make it impossible to debug the reason for having an invalid signature and transaction. With the current solution we intend to optimize for ease of development.
  • Uniqueness of all inputs is kept as it prevents introducing double spends in the same transaction.

Copyright

Copyright and related rights waived via CC0.

tip: 21
title: Serialization Primitives
description: Introduce primitives to describe the binary serialization of objects.
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/41
status: Active
type: Standards
layer: Core
created: 2021-11-22

Summary

This document introduces the primitives and concepts that are used throughout the IOTA protocol RFCs to describe the binary serialization of objects.

Motivation

Prior to this document, each RFC contained its own section describing the serialization of its objects. This RFC introduces consistent serialization concepts and avoids duplication in other RFCs.

Detailed design

Schemas

Serializable objects are represented by a schema. Each schema consists of a list of fields, which each have a name and a type. The type of a field can either be a simple data type or another schema, then called subschema.

Data types

All the supported data types are described in the following table:

| Name | Description |
| --- | --- |
| uint8 | An unsigned 8-bit integer encoded in Little Endian. |
| uint16 | An unsigned 16-bit integer encoded in Little Endian. |
| uint32 | An unsigned 32-bit integer encoded in Little Endian. |
| uint64 | An unsigned 64-bit integer encoded in Little Endian. |
| uint256 | An unsigned 256-bit integer encoded in Little Endian. |
| ByteArray[N] | A static size byte array of N bytes. |
| (uint8)ByteArray | A dynamically sized byte array. A leading uint8 denotes its length in bytes. |
| (uint16)ByteArray | A dynamically sized byte array. A leading uint16 denotes its length in bytes. |
| (uint32)ByteArray | A dynamically sized byte array. A leading uint32 denotes its length in bytes. |
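The integer and dynamic byte-array encodings above can be sketched with Python's standard `struct` module. These helper names are illustrative, not part of the specification.

```python
import struct

def encode_uint32(value: int) -> bytes:
    # All integer types are encoded in little-endian byte order ("<").
    return struct.pack("<I", value)

def encode_uint8_bytearray(data: bytes) -> bytes:
    # (uint8)ByteArray: a leading uint8 denotes the length in bytes.
    if len(data) > 0xFF:
        raise ValueError("data too long for a uint8 length prefix")
    return struct.pack("<B", len(data)) + data

def encode_uint32_bytearray(data: bytes) -> bytes:
    # (uint32)ByteArray: a leading uint32 denotes the length in bytes.
    return struct.pack("<I", len(data)) + data
```

For example, `encode_uint8_bytearray(b"IOTA")` yields the single length byte `0x04` followed by the four ASCII bytes of "IOTA".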

Subschemas

In order to create complex schemas, one or multiple subschemas can be included into an outer schema. The keywords that describe the allowed combinations of such subschemas are described in the following table:

| Name | Description |
| --- | --- |
| oneOf | One of the listed subschemas. |
| optOneOf | One of the listed subschemas or none. |
| anyOf | Any (one or more) of the listed subschemas. |
| optAnyOf | Any (one or more) of the listed subschemas or none. |
| atMostOneOfEach | At most one (none or one) of each of the listed subschemas. |

Copyright

Copyright and related rights waived via CC0.

tip: 22
title: IOTA Protocol Parameters
description: Describes the global protocol parameters for the IOTA protocol
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/43
status: Active
type: Standards
layer: Core
created: 2021-11-29

Summary

This TIP describes the global protocol parameters for the IOTA protocol.

Motivation

Various other protocol TIPs rely on certain constants that need to be defined for actual implementations of nodes or other applications using the protocol. This TIP serves as a single document to provide these parameters. It also serves as a historical record of protocol parameter changes.

Detailed design

| Name | Value | Description |
| --- | --- | --- |
| Network Name | "iota-mainnet" | Identifier string of the network. Its hash is used for the Network ID field in transactions. |
| Protocol Version | 2 | Protocol version currently used by the network. |
| Max Block Length | 32768 | Maximum length of a block in bytes. Limits Tangle storage size and communication costs. |
| Max Parents Count | 8 | Maximum number of parents of a block. |
| Min PoW Score | 1500.0 | Minimum PoW score for blocks to pass syntactic validation. |
| First Milestone Index | 1 | First valid milestone index. |
| Max IOTA Supply | 4600000000000000 | Total amount of IOTA coins in circulation. |
| Max Inputs Count | 128 | Maximum number of inputs in a transaction payload. |
| Max Outputs Count | 128 | Maximum number of outputs in a transaction payload. |
| Max Native Token Count | 64 | Maximum number of different native tokens that can be referenced in one transaction. |
| Max Tag Length | 64 | Maximum length of a Tag field in bytes. |
| Max Metadata Length | 8192 | Maximum length of a Metadata field in bytes. |
| VByte Cost | 250 | Minimum amount of IOTA that needs to be deposited per vbyte of an output. |
| SLIP-44 Coin Type (decimal) | 4218 | Registered coin type (decimal) for usage in level 2 of BIP44 described in chapter "Coin type". |
| SLIP-44 Path Component (coin_type') | 0x8000107a | Registered path component for usage in level 2 of BIP44 described in chapter "Coin type". |
| Bech32 Human-Readable Part | iota | HRP prefix to use for Bech32 encoded IOTA addresses (e.g. iota1zzy3drvj6zugek60srqwhqctkjldx3qle5yuvapj). |
| Bech32 Human-Readable Part (Test) | atoi | HRP prefix to use for Bech32 encoded IOTA addresses on test- or development networks (e.g. atoi1zzy3drvj6zugek60srqwhqctkjldx3qle5fhvhm6). |
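The note that the hash of the Network Name is used for the Network ID field can be sketched as follows. This mirrors how common node implementations derive the uint64 value (first 8 bytes of the BLAKE2b-256 hash of the name, read little-endian); treat it as an illustrative sketch rather than a normative definition.

```python
import hashlib
import struct

def network_id(network_name: str) -> int:
    """Derive the uint64 Network ID from a Network Name string: the first
    8 bytes of the BLAKE2b-256 hash of the name, interpreted as a
    little-endian unsigned integer (assumed from common implementations)."""
    digest = hashlib.blake2b(network_name.encode(), digest_size=32).digest()
    return struct.unpack("<Q", digest[:8])[0]
```

With this scheme, transactions built for "iota-mainnet" carry a different Network ID than those built for a test network, so they cannot be replayed across networks.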

Rationale for parameter choices

Transaction and block limits

The block parameters Max Block Length and Max Parents Count, as well as the transaction parameters Max Inputs Count, Max Outputs Count, Max Native Token Count, Max Tag Length and Max Metadata Length govern block and transaction validity. Their values have been chosen to ensure functionality of the protocol within constrained resource restrictions. Furthermore, choosing more conservative values here is preferable, as increasing such limits can always be done while preserving backward compatibility.

Dust protection

The VByte Cost is the core parameter of the dust protection. The reasoning behind its value is explained in TIP-19 Dust Protection.

Copyright

Copyright and related rights waived via CC0.

tip: 23
title: Tagged Data Payload
description: Block payload for arbitrary data
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/54
status: Active
type: Standards
layer: Core
created: 2022-01-24

Abstract

The payload concept offers a very flexible way to combine and encapsulate information in the IOTA protocol. This document proposes a basic payload type that allows the addition of arbitrary data.

Motivation

The most flexible way to extend an existing object is by the addition of arbitrary data. This payload provides a way to do just that. An optional tag can be used to categorize the data.

Specification

Serialized Layout

The following table describes the serialization of a Tagged Data Payload following the notation from TIP-21:

| Name | Type | Description |
| --- | --- | --- |
| Payload Type | uint32 | Set to value 5 to denote a Tagged Data Payload. |
| Tag | (uint8)ByteArray | The tag of the data. A leading uint8 denotes its length. |
| Data | (uint32)ByteArray | Binary data. A leading uint32 denotes its length. |

It is important to note that Tag is not considered by the protocol; it merely serves as a marker for second layer applications.

Syntactic Validation

  • length(Tag) must not be larger than Max Tag Length.
  • Given the type and length information, the Tagged Data Payload must consume the entire byte array of the Payload field of the encapsulating object.
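Putting the layout and validation rules together, serializing a Tagged Data Payload can be sketched in Python. The Max Tag Length value of 64 is taken from the protocol parameters in TIP-22; the function name is illustrative.

```python
import struct

TAGGED_DATA_PAYLOAD_TYPE = 5  # Payload Type value for Tagged Data
MAX_TAG_LENGTH = 64           # Max Tag Length protocol parameter (TIP-22)

def serialize_tagged_data(tag: bytes, data: bytes) -> bytes:
    """Serialize a Tagged Data Payload per the layout above:
    uint32 payload type, (uint8)ByteArray tag, (uint32)ByteArray data."""
    # Syntactic validation: length(Tag) must not exceed Max Tag Length.
    if len(tag) > MAX_TAG_LENGTH:
        raise ValueError("tag exceeds Max Tag Length")
    return (
        struct.pack("<I", TAGGED_DATA_PAYLOAD_TYPE)  # Payload Type, uint32 LE
        + struct.pack("<B", len(tag)) + tag          # Tag, (uint8)ByteArray
        + struct.pack("<I", len(data)) + data        # Data, (uint32)ByteArray
    )
```

For the tag "IOTA" and the data "hello world", this produces a 24-byte payload, matching the worked block example in TIP-24.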

Rationale

As the tag is not considered by the protocol, it could also be removed completely. However, Legacy IOTA and Chrysalis supported sending of arbitrary data indexed with a tag. Thus, in order to simplify the migration of second layer applications using these protocols, the optional Tag has been added which can be used in a similar manner.

Copyright

Copyright and related rights waived via CC0.

tip: 24
title: Tangle Block
description: Generalization of the Tangle transaction concept
author: Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/55
status: Active
type: Standards
layer: Core
replaces: 6
created: 2022-01-24

Abstract

The Tangle is the graph data structure behind IOTA. In the legacy IOTA protocol, the vertices of the Tangle are represented by transactions. This document proposes an abstraction of this idea where the vertices are generalized blocks, which then contain the transactions or other structures that are processed by the IOTA protocol. Just as before, each block directly approves other blocks, which are known as parents.

The blocks can contain payloads. These are core payloads that will be processed by all nodes as part of the IOTA protocol. Some payloads may have other nested payloads embedded inside. Hence, parsing is done layer by layer.

Motivation

To better understand this layered design, consider the Internet Protocol (IP), for example: There is an Ethernet frame that contains an IP payload. This in turn contains a TCP packet that encapsulates an HTTP payload. Each layer has a certain responsibility and once this responsibility is completed, we move on to the next layer.

The same is true with how blocks are parsed. The outer layer of the block enables the mapping of the block to a vertex in the Tangle and allows us to perform some basic validation. The next layer may be a transaction that mutates the ledger state, and one layer further may provide some extra functionality on the transactions to be used by applications.

By making it possible to add and exchange payloads, an architecture is being created that can easily be extended to accommodate future needs.

Specification

Block ID

The Block ID is the BLAKE2b-256 hash of the entire serialized block.

Serialized Layout

The following table describes the serialization of a Block following the notation from TIP-21:

| Name | Type | Description |
| --- | --- | --- |
| Protocol Version | uint8 | Protocol version number of the block. |
| Parents Count | uint8 | The number of blocks that are directly approved. |
| Parents | anyOf Parent | References another directly approved block. |
| ↳ Block ID | ByteArray[32] | The Block ID of the parent. |
| Payload Length | uint32 | The length of the following payload in bytes. A length of 0 means no payload will be attached. |
| Payload | optOneOf Generic Payload | An outline of a generic payload. |
| ↳ Payload Type | uint32 | The type of the payload. It will instruct the node how to parse the fields that follow. |
| ↳ Data Fields | ANY | A sequence of fields, where the structure depends on Payload Type. |
| Nonce | uint64 | The nonce which lets this block fulfill the PoW requirement. |

Syntactic validation

The Tangle can only contain syntactically valid blocks. Invalid blocks must be rejected by the node. The following criteria define whether a block passes syntactic validation:

  • The total length of the serialized block must not exceed Max Block Length.
  • Protocol Version must match the Protocol Version config parameter of the node.
  • Parents:
    • Parents Count must be at least 1 and not larger than Max Parents Count.
    • Parents must be sorted in lexicographical order.
    • Each Block ID must be unique.
  • Payload (if present):
    • Payload Type must match one of the values described under Payloads.
    • Data Fields must be correctly parsable in the context of the Payload Type.
    • The payload itself must pass syntactic validation.
  • Nonce must be valid with respect to the PoW condition described under Payloads. The PoW score itself is computed according to TIP-12.
  • There must be no trailing bytes after all block fields have been parsed.
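The Parents rules above can be sketched with a small check. Note that requiring a strictly ascending lexicographical order covers both the sorting and the uniqueness criteria in one comparison; the helper name and the default of 8 (Max Parents Count from TIP-22) are illustrative.

```python
MAX_PARENTS_COUNT = 8  # Max Parents Count protocol parameter (TIP-22)

def parents_are_valid(parents: list[bytes], max_parents: int = MAX_PARENTS_COUNT) -> bool:
    """Check the Parents rules: Parents Count in [1, Max Parents Count],
    lexicographically sorted, and each Block ID unique. A strictly
    ascending order implies uniqueness, so one pass suffices."""
    if not 1 <= len(parents) <= max_parents:
        return False
    return all(a < b for a, b in zip(parents, parents[1:]))
```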

PoW validation

The PoW that needs to be performed for each block protects the network against denial-of-service attacks where in a short time too many blocks are issued for the nodes to process. As the processing time of a block heavily depends on the contained payload, the PoW check can also depend on the Payload Type and is described under Payloads. It is important to note that the actual parsing and validating of a payload can be computationally expensive. Thus, it is recommended to first parse the block with all its fields including Payload Type (but without parsing or validating the actual payload Data Fields). Then, simple syntactic validation steps – including PoW validation – can be performed and invalid blocks filtered out before the payload is validated. With this approach, payload-based PoW validation is not significantly more expensive than payload-agnostic validation.

Payloads

While blocks without a payload, i.e. Payload Length set to zero, are valid, such blocks do not contain any information. As such, blocks usually contain a payload. The detailed specification of each payload type is out of scope of this TIP. The following table lists all currently specified payloads that can be part of a block and links to their specification:

| Payload Name | Type Value | TIP | PoW Condition |
| --- | --- | --- | --- |
| No Payload | - | - | PoW score ≥ Min PoW Score |
| Tagged Data | 5 | TIP-23 | PoW score ≥ Min PoW Score |
| Transaction | 6 | TIP-20 | PoW score ≥ Min PoW Score |
| Milestone | 7 | TIP-29 | nonce = 0x0000000000000000 |

Example

Below is the full serialization of a valid block with a Tagged Data Payload. The tag is the "IOTA" ASCII string and the data is the "hello world" ASCII string. Bytes are expressed as hexadecimal numbers.

  • Protocol Version (1-byte): 02 (2)
  • Parents Count (1-byte): 02 (2)
  • Parents (64-byte):
    • 210fc7bb818639ac48a4c6afa2f1581a8b9525e20fda68927f2b2ff836f73578
    • db0fa54c29f7fd928d92ca43f193dee47f591549f597a811c8fa67ab031ebd9c
  • Payload Length (4-byte): 18000000 (24)
  • Payload (24-byte):
    • Payload Type (4-byte): 05000000 (5)
    • Tag (5-byte):
      • Length (1-byte): 04 (4)
      • Tag (4-byte): 494f5441 ("IOTA")
    • Data (15-byte):
      • Length (4-byte): 0b000000 (11)
      • Data (11-byte): 68656c6c6f20776f726c64 ("hello world")
  • Nonce (8-byte): ce6d000000000000 (28110)
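The example above can be reassembled field by field to confirm the layout and to compute the Block ID (the BLAKE2b-256 hash of the entire serialized block). This is an illustrative sketch using the exact bytes listed.

```python
import hashlib
import struct

# Parents from the example, already in lexicographical order.
parents = [
    bytes.fromhex("210fc7bb818639ac48a4c6afa2f1581a8b9525e20fda68927f2b2ff836f73578"),
    bytes.fromhex("db0fa54c29f7fd928d92ca43f193dee47f591549f597a811c8fa67ab031ebd9c"),
]
# The 24-byte Tagged Data Payload ("IOTA" tag, "hello world" data).
payload = bytes.fromhex("0500000004494f54410b00000068656c6c6f20776f726c64")

block = (
    struct.pack("<B", 2)               # Protocol Version
    + struct.pack("<B", len(parents))  # Parents Count
    + b"".join(parents)                # Parents
    + struct.pack("<I", len(payload))  # Payload Length (24)
    + payload                          # Payload
    + struct.pack("<Q", 28110)         # Nonce (0x6dce, little-endian)
)
assert len(block) == 1 + 1 + 64 + 4 + 24 + 8  # 102 bytes in total

# Block ID: BLAKE2b-256 hash of the entire serialized block.
block_id = hashlib.blake2b(block, digest_size=32).hexdigest()
```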

Rationale and alternatives

Instead of creating a layered approach, we could have simply created a flat transaction block that is tailored to mutate the ledger state, and try to fit all the use cases there. For example, with the tagged data use case, we could have filled some section of the transaction with that particular data. Then, this transaction would not correspond to a ledger mutation but instead only carry data.

This approach seems less extensible. It might have made sense if we had wanted to build a protocol that is just for ledger mutating transactions, but we want to be able to extend the protocol to do more than that.

Copyright

Copyright and related rights waived via CC0.

tip: 25
title: Core REST API
description: Node Core REST API routes and objects in OpenAPI Specification
author: Samuel Rufinatscha (@rufsam) , Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/pull/27, https://github.com/iotaledger/tips/discussions/53, https://github.com/iotaledger/tips/pull/57
status: Active
type: Standards
layer: Interface
replaces: 13
created: 2022-01-26

Summary

This document proposes the core REST API for nodes supporting the IOTA protocol.

API

The API is described using the OpenAPI Specification:

Swagger Editor

Copyright

Copyright and related rights waived via CC0.

tip: 26
title: UTXO Indexer API
description: UTXO Indexer REST API routes and objects in OpenAPI Specification
author: Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/pull/62, https://github.com/iotaledger/tips/discussions/53
status: Active
type: Standards
layer: Interface
created: 2022-01-27

Abstract

The IOTA UTXO Indexer API defines the standard REST API interface that needs to be fulfilled by indexer node plugins or standalone indexer applications.

The purpose of the UTXO Indexer API is to:

  • Provide access to structured, indexed records (outputs) of the UTXO ledger of the Tangle,
  • Support client queries on the structured data to fetch outputs of interest,
  • Offload network critical nodes from having to run the indexer application.

Motivation

TIP-18 introduces new output types into the IOTA UTXO ledger. These new outputs support a variety of new features such as different unlocking conditions and feature blocks.

The indexer API makes it possible for clients to retrieve outputs based on present features, furthermore to filter them with more complex queries.

The main client applications the API is designed for are wallets, but other applications may also use or extend it.

Specification

The API is described using the OpenAPI Specification:

Swagger Editor

Rationale

This discussion gives a good overview of why the core and indexer APIs are separated. In short, indexing the ledger is considered to be a L2 application, and as such, it is not a mandatory part of IOTA core node implementations.

Alternatively, all indexing could be baked into the core software, but that would require factoring the "cost" of indexing into the byte cost of outputs, resulting in higher minimal dust deposit requirements. Network nodes that do not interact with clients but form the backbone of the network would also have to perform indexing tasks for no reason.

The new architecture also opens up the door for developing more advanced indexing applications detached from node implementations.

Backwards Compatibility

Some routes from the previous REST API (TIP-13) are removed and are supported in the new indexer API. For more details, browse TIP-25.

Reference Implementation

Hornet reference implementation:

  • https://github.com/gohornet/inx-indexer

Copyright

Copyright and related rights waived via CC0.

tip: 27
title: IOTA NFT Standard IRC27
description: Define NFT standard and creator royalties
author: Adam Eunson (@AdamCroply) , Merul Dhiman (@coodos) 
discussions-to: https://github.com/iotaledger/tips/discussions/59
status: Active
type: Standards
layer: IRC
created: 2022-03-04

IOTA NFT Standard - IRC27

Abstract

IRC27 is a series of standards to support interoperable and universal NFT systems throughout the IOTA ecosystem, to provide a more robust and secure system for creators and buyers.

Introduction

This document aims to support a universal system that can provide dApp developers and creators with an interoperable foundation of standards to support ease-of-adoption and a connected NFT ecosystem, bringing value, security, trust, and interoperability.

Focusing on the primary use case for digital assets as NFTs, this defined standard supports key aspects in the creation, trade, and exchange of different assets with a focus on image, video, audio, and 3D asset file types.

To support an easy-to-implement system the IOTA NFT Standard supports:

  • A Collection ID system that defines NFT origins by issuerId and collectionId for authenticity and verification within the IOTA NFT space.
  • A Creator Royalty System that can support universal creator royalties throughout the ecosystem.
  • An NFT Schema Standard allowing for easily definable keys and values for ease-of-integration.
  • Version Modelling to allow for easy updates as well as backwards compatibility.
  • Modular System Design to give developers the freedom to utilise only required elements, as well as allowing future standards expansion beyond the existing standard model.

The standard provides the foundation for future expansion, supporting a modular design to provide an opportunity for selective integration, as well as further use case expansion through additional modules as time goes by.

Motivation

Why IOTA NFT Standards?

Non-standardised NFT systems have caused concerns and issues across a number of areas in other ecosystems. The lack of interoperable standards presents numerous awkward and complicated experiences and, in some ecosystems, has resulted in countless verification and API issues that segment the NFT community.

Early safeguards are possible to put in place to support a more secure and interoperable ecosystem that puts creators and buyer security at the forefront, providing developers and dApp makers the grounds to build a more connected and consistent ecosystem.

With the IOTA Tokenization Framework in its infancy, the early adoption of an IOTA NFT Standard can support a safer, more secure environment for creators and dApp providers, allowing an easily interoperable experience throughout the IOTA ecosystem.

In this document we will present the IOTA NFT Standard - IRC27.

Specification

Collection ID

The IOTA Tokenization Framework allows for a unique and robust solution when defining the identity of a collection. The integration of such a system can support verification of the origins of the creation of an NFT; for example, an artist may create a collection of works labelled under a single collection. This allows for ease of verification for buyers and 3rd party application developers to provide an easily observable system of authenticity for users navigating the IOTA NFT space.

The standard is defined utilising the creation mechanism for NFTs.

issuerId (referred to as Issuer Block in TIP-18) is already defined in the framework, allowing every NFT created from the same source to be easily defined.

Each NFT in the IOTA Tokenization Framework has its own unique address, that allows the ability to define a collection UTXO that can subsequently mint each unique NFT within that collection.

The nftId of a collection NFT is defined as the collectionId.

The collectionId acts as a unique identifier for the collection and allows the collectionNft to control NFT creation in a collection. This gives considerable control: NFT creation in a collection can be locked for some time, and the collectionNft (the parent NFT of all the NFTs minted within a collection) can be transferred, upon which the new holder gains the ability to add NFTs to the collection and thus ownership of the collection brand. The collection can also be locked permanently by destroying the collection NFT.

A creator should define the UTXO of their collection NFT as the sole minting source of an entire collection that is the collectionId.

A creator may choose to burn the collection NFT on completion of minting or retain the collection NFT to add further NFTs to the collection over time.

The UTXO of the collection NFT, nftId, acts as the collectionId for the collection and can be used in dApps to define the verified origin of an NFT.

To resolve a defined collectionId, request the collectionId UTXO of the collection NFT and look it up in the Collection Registry for human-identifiable verification.

To better serve the ecosystem with a single point of record for registered collections one possible Public Token Registry is defined in IOTA Public Token Registry - TIP 33 where further reading can be found.

It is important to note that several token registries may coexist in the future; TIP-27 only defines the data structure of NFT metadata. It is up to the registry to decide what criteria and method to use to verify and accept submissions.

Creator Royalties

A system to support interoperable royalty payments across dApps, allowing universal secondary market reward systems to be integrated throughout the ecosystem. Integration of such systems is at the discretion of the dApp developer, but supporting creator royalties is encouraged.

Royalty addresses may be defined under the royalties key within the NFT metadata.

  • The key inside the royalties object must be a valid iota1/smr1 address where royalties will be sent to.
  • The value must be a numeric decimal representing the required percentage, e.g. 0.05 = 5%.
{
  ...
  "royalties": {
    "iota1...a": 0.05
  }
}

In the event there are further royalty recipients, multiple royalty addresses can be used in the form of an object where each address is mapped to its percentage in a key-value format inside the royalties field.

{
  ...
  "royalties": {
    "iota1...a": 0.025,
    "iota1...b": 0.025,
    "iota1...c": 0.025,
    "iota1...d": 0.025
  }
}

The total decimal sum of all royalty percentages must never exceed 1 and is recommended not to exceed 0.5.

If royalties exists, its keys are iterated over and each royalty is paid out until no keys are left to iterate over.
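This payout iteration can be sketched in Python. It is a minimal sketch with assumptions not fixed by the standard: amounts are in base units, fractional amounts are floored, and the remainder is returned under a hypothetical "seller" key.

```python
def royalty_payouts(sale_amount: int, royalties: dict[str, float]) -> dict[str, int]:
    """Iterate over the royalties object and split a sale amount into
    per-address payouts; the remainder goes to the seller. Floor rounding
    and the 'seller' key are assumptions of this sketch."""
    if sum(royalties.values()) > 1:
        raise ValueError("sum of royalty percentages must never exceed 1")
    payouts = {addr: int(sale_amount * pct) for addr, pct in royalties.items()}
    payouts["seller"] = sale_amount - sum(payouts.values())
    return payouts
```

For a sale of 1000 base units with two 2.5% royalty entries, each royalty address receives 25 units and the seller keeps the remaining 950.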

NFT Schema

For ease of development and interoperability between applications within the ecosystem an extendable schema standard is defined to support ease of integration of NFTs for developers.

Each schema is defined by three main keys:

  • standard – the standard model
  • schema – the defined schema type
  • version – the version of the standard

Universal schema

Each NFT schema should consist of a collection of universal keys to define key properties of the NFT.

The standard defined is:

  • IRC27

The schema type is defined as a MIME type, for example:

  • Image files: image/jpeg, image/png, image/gif, etc.
  • Video files: video/x-msvideo (avi), video/mp4, video/mpeg, etc.
  • Audio files: audio/mpeg, audio/wav, etc.
  • 3D Assets: model/obj, model/u3d, etc.
  • Documents: application/pdf, text/plain, etc.

You may find all common MIME types in IANA's registry. Custom file types might define their own MIME types.

The version is defined by the version number used preceded with the letter v, current version:

  • v1.0

Define the standard, the type, and the version:


{
  "standard": "IRC27",
  "type": "image/jpeg",
  "version": "v1.0"
}

Additional keys that must be included in every NFT schema:

  • uri – URL pointing to the NFT file location with the MIME type defined in type.
  • name – alphanumeric text string defining the human identifiable name for the NFT.
{
  "standard": "IRC27",
  "version": "v1.0",
  "type": "image/jpeg",
  "uri": "https://mywebsite.com/my-nft-files-1.jpeg",
  "name": "My NFT #0001"
}

Optional, but recommended keys, that may be included in NFT schema include:

  • collectionName – alphanumeric text string defining the human identifiable collection name
  • royalties – object containing key-value pairs where a payment address is mapped to a payout percentage
  • issuerName – alphanumeric text string to define the human identifiable name of the creator
  • description – alphanumeric text string to define a basic description of the NFT
  • attributes – array of objects defining additional attributes of the NFT
{
  "standard": "IRC27",
  "version": "v1.0",
  "type": "image/jpeg",
  "uri": "https://mywebsite.com/my-nft-files-1.jpeg",
  "name": "My NFT #0001",
  "collectionName": "My Collection of Art",
  "royalties": {
    "iota1...a": 0.025,
    "iota1...b": 0.025
  },
  "issuerName": "My Artist Name",
  "description": "A little information about my NFT collection"
}

In addition to the required and recommended schema, the inclusion of attributes allows for versatile expansion for NFT metadata.

attributes are the attributes for the item, which will show up on dApps like NFT Marketplaces.

IRC27 NFT metadata follows the OpenSea metadata standards.

{
  "standard": "IRC27",
  "version": "v1.0",
  "type": "image/jpeg",
  "uri": "https://mywebsite.com/my-nft-files-1.jpeg",
  "name": "My NFT #0001",
  "collectionName": "My Collection of Art",
  "royalties": {
    "iota1...a": 0.025,
    "iota1...b": 0.025
  },
  "issuerName": "My Artist Name",
  "description": "A little information about my NFT collection"
  "attributes": [
    {
      "trait_type": "Background",
      "value": "Purple"
    },
    {
      "trait_type": "Element",
      "value": "Water"
    },
    {
      "trait_type": "Attack",
      "value": "150"
    },
    {
      "trait_type": "Health",
      "value": "500"
    }
  ]
}
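A schema like the one above can be checked for the required IRC27 keys with a small validator. This is a minimal sketch; registries and dApps may enforce stricter criteria (e.g. MIME type whitelists or royalty sums).

```python
import json

# Keys that must be included in every IRC27 NFT schema.
REQUIRED_KEYS = {"standard", "version", "type", "uri", "name"}

def is_valid_irc27(metadata_blob: bytes) -> bool:
    """Minimal IRC27 check: the metadata must be a JSON object containing
    all required keys, with the IRC27 standard and version markers."""
    try:
        meta = json.loads(metadata_blob)
    except ValueError:
        return False
    return (
        isinstance(meta, dict)
        and REQUIRED_KEYS <= meta.keys()
        and meta["standard"] == "IRC27"
        and meta["version"] == "v1.0"
    )
```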

Practical example

How does this all work for L1 NFT collections in the IOTA Tangle? Best to explain with an example.

In Stardust L1 NFTs are represented as outputs in the ledger. Each NFT output has the following properties:

  • (mandatory) nftId: a unique identifier of the NFT output assigned by the protocol upon minting
  • (mandatory) owner: address in Address Unlock Condition that is allowed to unlock the NFT output
  • (optional) immutableIssuer: an address (can be alias/nft address too) that minted the NFT output
  • (optional) immutableMetadata: binary blob of data defined upon minting by the issuer
  • (optional) sender: defines an address that transferred the nft to the current owner
  • (optional) mutableMetadata: binary blob defined by the last sender

To host metadata about the NFT, the immutableMetadata field of the output should be used, as the mutable one may be changed by the current owner. Storing metadata in an output increases the storage deposit requirement of the output, but then no additional off-chain metadata storage solution is required.

NFTs may be standalone assets, but often they are part of a collection. In EVM based NFT platforms, collections are represented as a single contract that manages the collection and keeps the ownership record of the NFTs within the collection. Since L1 NFTs in IOTA are represented as UTXOs in the ledger rather than smart contracts, minting a collection is conceptually different.

Minting L1 NFT Collections

The idea is to tie NFTs (individual UTXOs) within one collection together via the immutableIssuer property. The collection itself is represented by a special NFT output, the Collection NFT. It holds information about the properties of the collection and when included in transactions, it can mint the NFTs within the collection where immutableIssuer becomes the nftId of the Collection NFT.

It is possible to timelock the Collection NFT on protocol level to prevent minting of the NFTs for a certain time period. It is also possible to send the Collection NFT to the zero address, or burn it altogether, which essentially means that the collection is locked forever: no more collection items can ever be minted. It is not possible to define on protocol level how many items the Collection NFT can mint, unless it is deposited into an L2 smart contract that manages issuer rights.

Let's look at a practical example of how the process works.

1. Minting the Collection NFT

The issuer mints an NFT output on L1 with the following properties:

  • nftId: a unique identifier of the NFT output assigned by the protocol upon minting. This will become the collectionId.
  • immutableIssuer: L1 address of the minting artist. Can be used to prove authenticity of the collection.
  • immutableMetadata: binary blob of data defined upon minting by the issuer. This will become the collectionMetadata.

The issuer of the Collection NFT defines the collection metadata according to IRC-27 standard. In our example, collectionMetadata is a JSON object:

{
  "standard": "IRC27",
  "version": "v1.0",
  "type": "text/html",
  "uri": "https://my-awesome-nft-project.com",
  "name": "My Awesome NFT Collection",
  "issuerName": "Me"
}
  • The binary blob of this JSON object is put in the immutableMetadata field of the NFT output.

2. Minting NFTs within the collection

The issuer includes the Collection NFT in a transaction that mints NFTs within the collection. The number of mintable NFTs in one transaction is bounded by the protocol: at most Max Outputs Count (defined in TIP-22) minus 1, since the Collection NFT itself is also part of the outputs.

The minted NFTs will have the following properties:

  • nftId: the unique identifier of the NFT output assigned by the protocol upon minting.
  • immutableIssuer: is set as collectionId of the Collection NFT. This unique value identifies which collection the NFT belongs to.
  • immutableMetadata: metadata for the individual NFT. Binary blob of an IRC-27 compliant JSON object.

For item 4 for example, the metadata is:

{
  "standard": "IRC27",
  "version": "v1.0",
  "type": "image/gif",
  "uri": "https://my-awesome-nft-project.com/item-4.gif",
  "name": "#4 My Awesome NFT",
  "issuerName": "Me",
  "royalties": {
    "smr1q5948....": 0.05
  },
  "collectionName": "My Awesome NFT Collection",
  "attributes": [
    {
      "trait_type": "awesomeness",
      "value": 60
    }
  ]
}

In case the metadata is not stored in the NFT output but hosted off-chain, it is still recommended to include enough information in the immutableMetadata field for clients to locate it. The immutableMetadata JSON object would then look something like:

{
  "standard": "IRC27",
  "version": "v1.0",
  "type": "application/json",
  "uri": "https://my-awesome-nft-project.com/{id}.json",
  "name": "#4 My Awesome NFT"
}
  • Note the {id} substitution in the uri field. Clients should replace id with the nftId property of the NFT output, without the 0x prefix.
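The substitution clients are expected to perform can be sketched as follows (the nftId below is the one from the NFT Address example elsewhere in this repository and is purely illustrative):

```python
def resolve_metadata_uri(uri_template: str, nft_id: str) -> str:
    """Replace the {id} placeholder with the nftId, stripping the 0x prefix."""
    if nft_id.startswith("0x"):
        nft_id = nft_id[2:]
    return uri_template.replace("{id}", nft_id)

uri = resolve_metadata_uri(
    "https://my-awesome-nft-project.com/{id}.json",
    "0x3159b115e27128b6db16db5e61f1aa4c70d84a99be753faa3ee70d9ad9c6a6b7",
)
```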

3. Fetching content of a collection

Since each NFT within the collection has the immutableIssuer set as the collectionId, it is possible to query the UTXO Indexer API (TIP-26) to get the outputIds for all NFTs that belong to the collection. Then the output objects can be fetched via the Core API (TIP-25):

  • GET <node-url>/api/indexer/v1/outputs/nft?issuer=collectionId returns the outputIds that have collectionId in the issuer field.
  • GET <node-url>/api/core/v2/outputs/{outputId} returns the NFT output itself.

The Collection NFT can be fetched via:

  • GET <node-url>/api/indexer/v1/outputs/nft/{collectionId} returns the outputId of the Collection NFT (if not burnt).
  • GET <node-url>/api/core/v2/outputs/{outputId} returns the Collection NFT output itself.
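A small helper assembling these routes; the node URL and the HTTP client are left to the caller (any library such as requests can then fetch the assembled URLs):

```python
def nft_outputs_by_issuer_url(node_url: str, collection_id: str) -> str:
    # UTXO Indexer API (TIP-26): outputIds of all NFTs minted by this issuer.
    return f"{node_url}/api/indexer/v1/outputs/nft?issuer={collection_id}"

def output_url(node_url: str, output_id: str) -> str:
    # Core API (TIP-25): fetch the full output object by its outputId.
    return f"{node_url}/api/core/v2/outputs/{output_id}"
```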

Rationale

Interoperable Standards

For a unified IOTA NFT ecosystem, these standards have been designed to support ease of integration and cross-compatibility of NFTs throughout the IOTA network. Observations of undefined standards in other ecosystems have illustrated the importance of such groundwork in the early stages of the technology. Simple, well-defined keys such as uri, instead of nftUrl or fileLocation, support a much more interoperable experience for creators and dApp developers, with everyone working from the same foundations.

Supporting creators is also a key element in driving adoption of the technology. Royalty integrations vary substantially in other blockchain ecosystems, which remains a challenge for both third-party applications and creators in sustaining a consistent and reliable experience across different applications.

This standard also provides expansion paths, backwards compatibility and a universal guideline for the ecosystem to develop with. It allows an immediately interoperable environment that supports ease of adoption in the early stages of IOTA NFTs, while continually supporting feature expansion and future development.

Backwards Compatibility

Versioning

Expanding use-cases in the NFT space will present multiple requirements for different standards and schemas, and over time alterations and updates will be needed to support an evolving technology and future developments.

A version is introduced from the start to allow dApp developers and creators to maintain backwards compatibility with differing versions of the standard. It is defined as a numeric value preceded by the letter v. All future versions will be submitted as separate TIPs.

Current version v1.0

Modular Structure Expansion

A modular structure has been created for the standard to support use-case expansion, file-type extension and a standards catalogue. It allows creators to utilise minimalist implementations as well as the more advanced expanded standards.

Copyright and related rights waived via CC0.

tip: 28
title: Event API
description: Node event API definitions in AsyncAPI Specification
author: Luca Moser (@luca-moser) , Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/pull/33, https://github.com/iotaledger/tips/pull/66
status: Active
type: Standards
layer: Interface
created: 2022-03-02
replaces: 16

Abstract

This proposal describes the MQTT-based Node Event API for IOTA nodes. Clients may subscribe to topics provided by the node, which acts as the message publisher and broker at the same time.

Motivation

The event API makes it possible for clients to implement event-based architectures as opposed to polling supported by the REST API defined in draft TIP-25.

The event-based architecture should be of great benefit to:

  • wallets monitoring status of submitted blocks or transactions,
  • explorers displaying the evolution of the Tangle and ledger state,
  • archivers documenting the history of the Tangle.

Specification

The API is described using the AsyncAPI Specification:

AsyncAPI Editor

Rationale

  • MQTT is a lightweight protocol that is good at minimizing bandwidth and ensuring message delivery via Quality of Service.
  • It may run on resource constrained devices as well and works on top of TCP/IP protocol.
  • The publish-subscribe model makes information dissemination effective to interested clients only.

Backwards Compatibility

The previously employed event API described in TIP-16 is not backwards compatible with the current proposal, therefore versioning is introduced in the access URL of the API.

The response models are shared between the REST API and the event API.

The access route of the message broker should be updated to:

  • {nodeURL}/api/mqtt/v1

Reference Implementation

Broker

  • https://github.com/gohornet/inx-mqtt

Client

  • Go: https://github.com/iotaledger/iota.go/blob/develop/nodeclient/event_api_client.go
  • Rust: https://github.com/iotaledger/iota.rs/tree/develop/client/src/node_api/mqtt
  • TypeScript: https://github.com/iotaledger/iota.js/tree/feat/stardust/packages/mqtt

Copyright and related rights waived via CC0.

tip: 29
title: Milestone Payload
description: Coordinator issued milestone payload with Ed25519 authentication
author: Angelo Capossele (@capossele) , Wolfgang Welz (@Wollac) 
discussions-to: https://github.com/iotaledger/tips/pull/69
status: Active
type: Standards
layer: Core
created: 2022-03-25
replaces: 8

Abstract

In IOTA, nodes use the milestones issued by the Coordinator to reach a consensus on which transactions are confirmed. This TIP proposes a milestone payload for the blocks described in the IOTA protocol TIP-24. It uses Edwards-curve Digital Signature Algorithm (EdDSA) to authenticate the milestones.

Motivation

In order to integrate the concept of milestones consistently into Tangle blocks, this TIP describes a dedicated payload type for milestones. In this context, the document also describes how Ed25519 signatures are used to assure authenticity of the issued milestones. In order to make the management and security of the used private keys easier, simple multisignature features with support for key rotation have been added.

Specification

The BLAKE2b-256 hash of the Milestone Essence, consisting of the actual milestone information (like its index number or position in the tangle), is signed using the Ed25519 signature scheme as described in the IRTF RFC 8032.

To increase the security of the design, a milestone can (optionally) be independently signed by multiple keys at once. These keys should be operated by detached signature provider services running on independent infrastructure elements. This assists in mitigating the risk of an attacker having access to all the key material necessary for forging milestones. While the Coordinator takes responsibility for forming Milestone Payload Blocks, it delegates signing to these providers through an ad-hoc RPC connector. Mutual authentication should be enforced between the Coordinator and the signature providers: a client-authenticated TLS handshake scheme is advisable. To increase the flexibility of the mechanism, nodes can be configured to require a quorum of valid signatures to consider a milestone as genuine.

In addition, a key rotation policy can also be enforced by limiting key validity to certain milestone intervals. Accordingly, nodes need to know which public keys are applicable for which milestone index. This can be provided by configuring a list of entries consisting of the following fields:

  • Index Range providing the interval of milestone indices for which this entry is valid. The interval must not overlap with any other entry.
  • Applicable Public Keys defining the set of valid public keys.
  • Signature Threshold specifying the minimum number of valid signatures. Must be at least one and not greater than the number of Applicable Public Keys.
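These entries can be modeled directly in a node's configuration; a sketch with the overlap and threshold checks implied above (field names are illustrative, not a prescribed configuration format):

```python
from dataclasses import dataclass

@dataclass
class KeyRangeEntry:
    start_index: int        # first milestone index covered (inclusive)
    end_index: int          # last milestone index covered (inclusive)
    public_keys: frozenset  # Applicable Public Keys (hex strings)
    threshold: int          # Signature Threshold

def validate_entries(entries) -> bool:
    for e in entries:
        # Threshold must be at least one and not exceed the key count.
        if not (1 <= e.threshold <= len(e.public_keys)):
            return False
    # Index ranges must not overlap with any other entry.
    ordered = sorted(entries, key=lambda e: e.start_index)
    for a, b in zip(ordered, ordered[1:]):
        if a.end_index >= b.start_index:
            return False
    return True
```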

Milestone ID

The Milestone ID is the BLAKE2b-256 hash of the serialized Milestone Essence. It is important to note that the signatures do not impact the Milestone ID.
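In Python's standard library this is a one-liner; the essence bytes below are placeholders, a real essence follows the serialized layout in the next section:

```python
import hashlib

def milestone_id(serialized_essence: bytes) -> bytes:
    # BLAKE2b-256 over the serialized Milestone Essence only;
    # the trailing signature blocks are deliberately excluded.
    return hashlib.blake2b(serialized_essence, digest_size=32).digest()

mid = milestone_id(b"\x00" * 64)  # illustrative essence bytes
```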

Structure

Serialized Layout

All values are serialized in little-endian encoding. The serialized form of the milestone is deterministic, meaning the same logical milestone always results in the same serialized byte sequence.

The following table describes the entirety of a Milestone Payload in its serialized form following the notation from TIP-21:

Name Type Description
Payload Type uint32 Set to value 7 to denote a Milestone Payload.
Essence oneOf
Milestone Essence
Describes the signed part of a Milestone Payload.
Name Type Description
Index Number uint32 The index number of the milestone.
Timestamp uint32 The Unix time (seconds since Unix epoch) at which the milestone was issued.
Protocol Version uint8 The protocol version of the Milestone Payload and its block.
Previous Milestone ID ByteArray[32] The Milestone ID of the milestone with Index Number - 1.
Parents Count uint8 The number of parents referenced by this milestone.
Parents anyOf
Parent
A block that is directly referenced by this milestone.
Name Type Description
Block ID ByteArray[32] The Block ID of the parent.
Inclusion Merkle Root ByteArray[32] The Merkle tree hash (BLAKE2b-256) of the Block IDs of all blocks included by this milestone.
Applied Merkle Root ByteArray[32] The Merkle tree hash (BLAKE2b-256) of the Block IDs of all blocks applied by this milestone that contain a state-mutating transaction¹
Metadata (uint16)ByteArray Binary data only relevant to the milestone issuer, e.g. internal state. A leading uint16 denotes its length.
Options Count uint8 The number of milestone options following.
Options atMostOneOfEach
Receipts Milestone Option
Defines UTXOs for newly migrated funds.
Protocol Parameters Milestone Option
Defines dynamic changes to the protocol parameters.
Signatures Count uint8 The number of signature entries following.
Signatures anyOf
Ed25519 Signature
Name Type Description
Signature Type uint8 Set to value 0 to denote an Ed25519 Signature.
Public Key ByteArray[32] The Ed25519 public key of the signature.
Signature ByteArray[64] The Ed25519 signature signing the BLAKE2b-256 hash of the serialized Milestone Essence.

¹: See TIP-4.

Milestone options

The Options field holds additional data authenticated by the milestone.

The following table lists all the Milestone Option Types that are currently supported, as well as links to the corresponding specifications:

Payload Name          Type Value  TIP
Receipt               0           TIP-34
Protocol Parameters   1           TIP-29

Protocol Parameters Milestone Option

This Milestone Option is used to signal to nodes upcoming changes to the protocol parameters, such as a new protocol version or PoW difficulty.

Protocol Parameters Milestone Option
Defines changing protocol parameters.
Name Type Description
Milestone Option Type uint8 Set to value 1 to denote a Protocol Parameters Milestone Option.
Target Milestone Index uint32 The milestone index at which these protocol parameters become active.
Protocol Version uint8 The protocol version to be applied.
Parameters (uint16)ByteArray The protocol parameters in binary, serialized form.
Syntactic Validation
  • Target Milestone Index must be greater than Index Number of the milestone it is contained in.
  • Target Milestone Index must be less than or equal to Index Number + 30. (This value is fixed and technically not a protocol parameter as defined in TIP-22, as it should not be subject to protocol parameter changes induced by this option.)
  • length(Parameters) must not exceed Max Metadata Length.
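The two index rules translate to a simple bounds check:

```python
MAX_TARGET_DELTA = 30  # fixed value; deliberately not a TIP-22 protocol parameter

def valid_target_index(current_index: int, target_index: int) -> bool:
    # Target Milestone Index must lie strictly after the containing
    # milestone's Index Number, but at most 30 indices in the future.
    return current_index < target_index <= current_index + MAX_TARGET_DELTA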

Milestone Validation

Similar to transaction validation, milestone validation has been separated into two classes. For a milestone to be valid, both of them need to be true.

Syntactic validation

Syntactic validation can be checked from the Milestone Essence plus the blocks in the past cone referenced by it.

  • Essence:
    • Index Number must not be smaller than First Milestone Index.
    • If Index Number equals First Milestone Index, the following fields must be zeroed out:
      • Previous Milestone ID
      • Inclusion Merkle Root
      • Applied Merkle Root
    • If Index Number is greater than First Milestone Index, the milestone must reference (i.e. one of the Parents must contain or reference) another syntactically valid milestone whose Milestone ID matches Previous Milestone ID. With respect to that referenced milestone, the following must hold:
      • Index Number must increment by 1.
      • Timestamp must be strictly larger (i.e. at least one second later).
      • Inclusion Merkle Root must match the Merkle tree hash of the IDs of all blocks in White Flag Ordering (as described in TIP-2) that are newly referenced. (This always includes at least one valid milestone block with Previous Milestone ID.)
      • Applied Merkle Root must match the Merkle tree hash of the not-ignored state-mutating transactions that are newly referenced (see TIP-2).
    • Parents must match the Parents field of the encapsulating Block, i.e. the Block that contains the Milestone Payload.
    • length(Metadata) must not exceed Max Metadata Length.
    • Options:
      • Milestone Option Type must match one of the values described under Milestone Options.
      • The option itself must pass syntactic validation.
      • The options must be sorted in ascending order based on their Milestone Option Type.
  • Signatures:
    • Signatures Count must be at least the Signature Threshold and at most the number of Applicable Public Keys for the current milestone index.
    • For each signature block the following must be true:
      • Signature Type value must denote an Ed25519 Signature.
      • Public Key must be contained in Applicable Public Keys for the current milestone index.
      • Signature must contain a valid signature for Public Key.
    • The signature blocks must be sorted with respect to their Public Key in lexicographical order.
    • Each Public Key must be unique.
  • Given the type and length information, the Milestone Payload must consume the entire byte array of the Payload field of the Block.

Semantic validation

Semantic validation is defined in the context of all available blocks.

  • The milestone chain must not fork, i.e. there must not be two different, syntactically valid milestones with the same Index Number. In case of a fork, the correct state of the ledger cannot be derived from the milestones alone and usually the node implementation should alert the user and halt.

Rationale

  • Due to the layered design of blocks and payloads, it is practically not possible to prevent reattachments of Milestone Payloads. Hence, this payload has been designed in a way to be independent from the block it is contained in. A milestone should be considered as a virtual marker (referencing Parents) rather than an actual block in the Tangle. This concept is compatible with reattachments and supports a cleaner separation of the block layers.
  • Forcing matching Parents in the Milestone Payload and its block makes it impossible to reattach the same payload at different positions in the Tangle. Strictly speaking, this violates the separation of payload and block. However, it simplifies milestone processing as the position of the block will be the same as the position encoded in the Milestone Payload. Having these clear structural properties seems to be more desirable than a strict separation of layers.
  • While it is always possible to cryptographically prove that a block was confirmed by a given milestone by supplying all the blocks of a path from the milestone to the block, such a proof can become rather large (depending on the blocks). To simplify such proof-of-inclusions, the Inclusion Merkle Root of all the included blocks has been added.

Copyright

Copyright and related rights waived via CC0.

tip: 30
title: Native Token Metadata JSON Schema
description: A JSON schema that describes token metadata format for native token foundries.
author: Levente Pap @lzpap 
discussions-to: https://github.com/iotaledger/tips/pull/68
status: Active
type: Standards
layer: IRC
created: 2022-03-25
requires: 18

Abstract

This TIP describes a JSON schema to store native token metadata on-chain in foundry outputs.

Motivation

By introducing a standardized token metadata schema we aim to address the following problems:

  • Storing structured token metadata on-chain,
  • Interoperability of dApps, wallets and clients handling native tokens,
  • Creating the possibility of off-chain token verification based on social consensus.

Specification

Native tokens are user defined tokens controlled by foundries, as described in TIP-18. Each native token is identified by its 38-byte-long Token ID, which is also the unique identifier of its controlling foundry, the Foundry ID.
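Per TIP-18, the 38-byte Token ID is itself the Foundry ID: an alias address (1 type byte plus the 32-byte Alias ID), a 4-byte serial number and a 1-byte token scheme type, with integers serialized little-endian. A sketch splitting it apart (the hex value below is illustrative, not a real token):

```python
def parse_token_id(token_id_hex: str) -> dict:
    raw = bytes.fromhex(token_id_hex.removeprefix("0x"))
    assert len(raw) == 38, "Token ID / Foundry ID is 38 bytes"
    return {
        "alias_address": raw[0:33],  # 0x08 type byte + 32-byte Alias ID
        "serial_number": int.from_bytes(raw[33:37], "little"),
        "token_scheme_type": raw[37],
    }

parts = parse_token_id("0x08" + "11" * 32 + "2a000000" + "00")
```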

Given the Foundry ID, the most recent unspent foundry output controlling the supply of the native token can be fetched via the UTXO indexer API defined in draft TIP-26.

The foundry output may contain an immutable Metadata Feature that holds raw binary data. By encoding metadata in JSON format adhering to the JSON schema defined in this TIP and placing it in the immutable Metadata Feature of a foundry output, issuers can supply metadata to wallets, dApps and clients on-tangle, without the need for a metadata server.

Standardizing the JSON schema for token metadata plays an important role in establishing interoperability of decentralized applications and wallets.

JSON Schema

The proposed JSON schema is located here:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://github.com/iotaledger/tips/main/tips/TIP-0030/irc30.schema.json",
  "title": "IRC30 Native Token Metadata Schema",
  "description": "A JSON schema for IRC30 compliant native token metadata",
  "type": "object",
  "properties": {
    "standard": {
      "description": "The IRC standard of the token metadata",
      "type": "string",
      "pattern": "^IRC30$"
    },
    "name": {
      "description": "The human-readable name of the native token",
      "type": "string"
    },
    "description": {
      "description": "The human-readable description of the token",
      "type": "string"
    },
    "symbol": {
      "description": "The symbol/ticker of the token",
      "type": "string"
    },
    "decimals": {
      "description": "Number of decimals the token uses (divide the token amount by 10^decimals to get its user representation)",
      "type": "integer",
      "minimum": 0
    },
    "url": {
      "description": "URL pointing to more resources about the token",
      "type": "string"
    },
    "logoUrl": {
      "description": "URL pointing to an image resource of the token logo",
      "type": "string"
    },
    "logo": {
      "description": "The svg logo of the token encoded as a byte string",
      "type": "string"
    }
  },
  "required": [
    "standard",
    "name",
    "symbol",
    "decimals"
  ]
}

Examples

The following examples are located in the examples/ folder.

To try the schema validation in Python, install jsonschema package by running:

pip install jsonschema

Then navigate into the folder of this TIP (tips/TIP-0030/) of the cloned TIP repository and run the validation in console:

jsonschema -i examples/1-valid.json irc30.schema.json

If the validation fails, error messages are printed out to the console.

1. A minimum valid token metadata JSON

{
  "standard": "IRC30",
  "name": "FooCoin",
  "symbol": "FOO",
  "decimals": 3
}

2. A more descriptive valid token metadata JSON

{
  "standard": "IRC30",
  "name": "FooCoin",
  "description": "FooCoin is the utility and governance token of FooLand, a revolutionary protocol in the play-to-earn crypto gaming field.",
  "symbol": "FOO",
  "decimals": 3,
  "url": "https://foocoin.io",
  "logoUrl": "https://ipfs.io/ipfs/QmR36VFfo1hH2RAwVs4zVJ5btkopGip5cW7ydY4jUQBrkR"
}

3. Invalid token metadata

{
  "standard": "IRC27",
  "name": "FooCoin",
  "description": "FooCoin is the utility and governance token of FooLand, a revolutionary protocol in the play-to-earn crypto gaming field.",
  "decimals": 0.5
}

The metadata JSON is not a valid IRC30 token metadata JSON as:

  • The standard field is not IRC30
  • symbol property is missing, although it is required, and
  • decimals is not an integer.
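These three failures can be caught even without the jsonschema package; a minimal sketch covering just the basic rules above (the full schema checks more):

```python
REQUIRED = ("standard", "name", "symbol", "decimals")

def basic_irc30_check(metadata: dict) -> list:
    """Return a list of problems; an empty list means the basic checks pass."""
    problems = [f"missing required field: {k}" for k in REQUIRED if k not in metadata]
    if metadata.get("standard") != "IRC30":
        problems.append("standard field is not IRC30")
    decimals = metadata.get("decimals")
    if decimals is not None and (not isinstance(decimals, int) or decimals < 0):
        problems.append("decimals is not a non-negative integer")
    return problems

# The invalid example above fails on all three counts.
errors = basic_irc30_check({
    "standard": "IRC27",
    "name": "FooCoin",
    "decimals": 0.5,
})
```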

Rationale

The main motive of this design is to allow interoperability of applications handling native tokens while also leaving room for optional, non-required fields that might be needed for certain use-cases.

Alternatively, a non-standardized token metadata structure would lead to a fragmented application space and hence worse developer and user experiences while interacting with the network.

Backwards Compatibility

IRC30 aims to be a minimum standard that can be compatible with future token standards, as long as the few originally required fields are respected.

Copyright and related rights waived via CC0.

tip: 31
title: Bech32 Address Format
description: Extendable address format supporting various signature schemes and address types
author: Wolfgang Welz (@Wollac) , Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/pull/20
status: Active
type: Standards
layer: Interface
created: 2022-04-04
replaces: 11

Summary

This document proposes an extendable address format for the IOTA protocol supporting various signature schemes and address types. It relies on the Bech32 format to provide a compact, human-readable encoding with strong error correction guarantees.

Motivation

With Chrysalis, IOTA started using Ed25519 to generate digital signatures, in which addresses correspond to a BLAKE2b-256 hash. It is necessary to define a universal and extendable address format capable of encoding different types of addresses (introduced also in TIP-18).

The legacy IOTA protocol (1.0, pre-Chrysalis) relies on Base27 addresses with a truncated Kerl checksum. However, both the character set and the checksum algorithm have limitations:

  • Base27 is designed for ternary and is ill-suited for binary data.
  • The Kerl hash function also requires ternary input. Further, it is slow and provides no error-detection guarantees.
  • It does not support the addition of version or type information to distinguish between different kinds of addresses with the same length.

All of these points are addressed in the Bech32 format introduced in BIP-0173: in addition to using the human-friendly Base32 encoding with an optimized character set, it implements a BCH code that guarantees detection of any error affecting at most four characters and has less than a 1 in 10⁹ chance of failing to detect more errors.

This TIP proposes a simple and extendable binary serialization for addresses of different types that is then Bech32 encoded to provide a unique appearance for human-facing applications such as wallets.

Detailed design

Binary serialization

The address format uses a simple serialization scheme which consists of two parts:

  • The first byte describes the type of the address.
  • The remaining bytes contain the type-specific raw address bytes.

Currently, only three kinds of addresses are supported:

  • Ed25519, where the address consists of the BLAKE2b-256 hash of the Ed25519 public key.
  • Alias, where the address consists of the BLAKE2b-256 hash of the Output ID (defined in TIP-0020) that created the alias.
  • NFT, where the address consists of the BLAKE2b-256 hash of the Output ID (defined in TIP-0020) that created the NFT.

They are serialized as follows:

Type      First byte  Address bytes
Ed25519   0x00        32 bytes: The BLAKE2b-256 hash of the Ed25519 public key.
Alias     0x08        32 bytes: The BLAKE2b-256 hash of the Output ID that created the alias.
NFT       0x10        32 bytes: The BLAKE2b-256 hash of the Output ID that created the NFT.

Bech32 for human-readable encoding

The human-readable encoding of the address is Bech32 (as described in BIP-0173). A Bech32 string is at most 90 characters long and consists of:

  • The human-readable part (HRP), which conveys the protocol and distinguishes between the different networks. HRPs are registered in SLIP-0173:
    • iota is the human-readable part for IOTA Mainnet addresses (IOTA tokens)
    • atoi is the human-readable part for IOTA Testnet/Devnet addresses
    • smr is the human-readable part for Shimmer network addresses (Shimmer tokens)
    • rms is the human-readable part for Shimmer Testnet/Devnet addresses
  • The separator, which is always 1.
  • The data part, which consists of the Base32 encoded serialized address and the 6-character checksum.
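The full pipeline (serialize, regroup into 5-bit words, append checksum) follows the BIP-0173 reference algorithm; a condensed Python sketch, checked against the Ed25519 example below:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def _polymod(values):
    # BCH checksum polymod from BIP-0173.
    GEN = (0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3)
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if (top >> i) & 1 else 0
    return chk

def _hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def _to_5bit(data):
    # Regroup 8-bit bytes into 5-bit words, zero-padding the tail.
    acc = bits = 0
    out = []
    for b in data:
        acc = (acc << 8) | b
        bits += 8
        while bits >= 5:
            bits -= 5
            out.append((acc >> bits) & 31)
    if bits:
        out.append((acc << (5 - bits)) & 31)
    return out

def bech32_address(hrp: str, serialized: bytes) -> str:
    data = _to_5bit(serialized)
    poly = _polymod(_hrp_expand(hrp) + data + [0] * 6) ^ 1
    checksum = [(poly >> 5 * (5 - i)) & 31 for i in range(6)]
    return hrp + "1" + "".join(CHARSET[d] for d in data + checksum)

# Ed25519 example: type byte 0x00 followed by the BLAKE2b-256 hash.
addr = bech32_address("iota", bytes.fromhex(
    "00efdc112efe262b304bcf379b26c31bad029f616ee3ec4aa6345a366e4c9e43a3"))
```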

Examples

  • Ed25519 Address
    • Ed25519 public key (32-byte): 6f1581709bb7b1ef030d210db18e3b0ba1c776fba65d8cdaad05415142d189f8
    • BLAKE2b-256 hash (32-byte): efdc112efe262b304bcf379b26c31bad029f616ee3ec4aa6345a366e4c9e43a3
    • serialized (33-byte): 00efdc112efe262b304bcf379b26c31bad029f616ee3ec4aa6345a366e4c9e43a3
    • Bech32 string:
      • IOTA (64-char): iota1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6xqgyzyx
      • IOTA Testnet (64-char): atoi1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6x8x4r7t
      • Shimmer (63-char): smr1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6xhcazjh
      • Shimmer Testnet (63-char): rms1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6xrlkcfw
  • Alias Address
    • Output ID (34-byte): 52fdfc072182654f163f5f0f9a621d729566c74d10037c4d7bbb0407d1e2c6490000
    • Alias ID, BLAKE2b-256 hash (32-byte): fe80c2eb7c736da2f7c98ecf135ee9e34e4e076afe6e1dfebc9ec578b8f56d2f
    • serialized (33-byte): 08fe80c2eb7c736da2f7c98ecf135ee9e34e4e076afe6e1dfebc9ec578b8f56d2f
    • Bech32 string:
      • IOTA (64-char): iota1prlgpsht03ekmghhex8v7y67a835uns8dtlxu807hj0v279c74kj76j6rev
      • IOTA Testnet (64-char): atoi1prlgpsht03ekmghhex8v7y67a835uns8dtlxu807hj0v279c74kj7autzrp
      • Shimmer (63-char): smr1prlgpsht03ekmghhex8v7y67a835uns8dtlxu807hj0v279c74kj7dzrr0a
      • Shimmer Testnet (63-char): rms1prlgpsht03ekmghhex8v7y67a835uns8dtlxu807hj0v279c74kj7e9ge5y
  • NFT Address
    • Output ID (34-byte): 97b9d84d33419199483daab1f81ddccdeff478b6ee9040cfe026c517f67757880000
    • NFT ID, BLAKE2b-256 hash (32-byte): 3159b115e27128b6db16db5e61f1aa4c70d84a99be753faa3ee70d9ad9c6a6b7
    • serialized (33-byte): 103159b115e27128b6db16db5e61f1aa4c70d84a99be753faa3ee70d9ad9c6a6b7
    • Bech32 string:
      • IOTA (64-char): iota1zqc4nvg4ufcj3dkmzmd4uc034fx8pkz2nxl820a28mnsmxkec6ntw0vklm7
      • IOTA Testnet (64-char): atoi1zqc4nvg4ufcj3dkmzmd4uc034fx8pkz2nxl820a28mnsmxkec6ntwgz87pn
      • Shimmer (63-char): smr1zqc4nvg4ufcj3dkmzmd4uc034fx8pkz2nxl820a28mnsmxkec6ntwcu0ld0
      • Shimmer Testnet (63-char): rms1zqc4nvg4ufcj3dkmzmd4uc034fx8pkz2nxl820a28mnsmxkec6ntwvmy9kk

Drawbacks

  • Addresses look fundamentally different from the established 81-tryte legacy IOTA addresses. However, since the switch from ternary to binary and Chrysalis in general was a substantial change, this is a very reasonable and desired consequence.
  • A four character HRP plus one type byte only leaves a maximum of 48 bytes for the actual address.

Rationale and alternatives

  • There are several ways to convert the binary serialization into a human-readable format, e.g. Base58 or hexadecimal. The Bech32 format, however, offers the best compromise between compactness and error correction guarantees. A more detailed motivation can be found in BIP-0173 Motivation.
  • The binary serialization itself must be as compact as possible while still allowing you to distinguish between different address types of the same byte length. As such, the introduction of a version byte offers support for up to 256 different kinds of addresses at only the cost of one single byte.

Reference implementation

Example Go implementations:

  • wollac/iota-crypto-demo
  • iotaledger/iota.go/v3

Copyright

Copyright and related rights waived via CC0.

tip: 32
title: Shimmer Protocol Parameters
description: Describes the global protocol parameters for the Shimmer protocol
author: Wolfgang Welz (@Wollac) , Levente Pap (@lzpap) 
discussions-to: https://github.com/iotaledger/tips/pull/71
status: Active
type: Standards
layer: Core
created: 2022-04-04

Summary

This TIP describes the global protocol parameters for the Shimmer protocol.

Motivation

Various other protocol TIPs rely on certain constants that need to be defined for an actual implementation of nodes or other applications using the protocol. This TIP serves as a single document to provide these parameters. It also serves as a historical record of protocol parameter changes.

Detailed design

Shimmer Units

The base token of the Shimmer Network is Shimmer (SMR). SMR token representation supports 6 decimal places, therefore the smallest possible value that may occur in the network is 10⁻⁶ SMR = 0.000001 SMR. This indivisible amount is called a glow.

The concept of glow is analogous to satoshi for BTC, or wei for ETH.

Amount in glow             Amount in SMR
1 glow                     0.000001 SMR
10 glow                    0.00001 SMR
100 glow                   0.0001 SMR
1,000 glow                 0.001 SMR
10,000 glow                0.01 SMR
100,000 glow               0.1 SMR
1,000,000 glow             1 SMR
10,000,000 glow            10 SMR
100,000,000 glow           100 SMR
1,000,000,000 glow         1,000 SMR
10,000,000,000 glow        10,000 SMR
100,000,000,000 glow       100,000 SMR
1,000,000,000,000 glow     1,000,000 SMR

On protocol level, all token amounts are recorded in glow. Outputs of a transaction specify the transfer amounts in glow. Graphical user interfaces of explorers, wallets and other tools are recommended to display transfer amounts in SMR denomination.
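The denomination rule amounts to a fixed 10⁶ scale factor; a display-conversion sketch using exact decimal arithmetic (protocol-level math stays in integer glow):

```python
from decimal import Decimal

GLOW_PER_SMR = 10**6

def glow_to_smr(glow: int) -> Decimal:
    # Exact conversion for display purposes only.
    return Decimal(glow) / GLOW_PER_SMR

def smr_to_glow(smr: str) -> int:
    # Parse a user-entered SMR amount back into integer glow.
    return int(Decimal(smr) * GLOW_PER_SMR)
```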

Genesis Supply

The Shimmer genesis supply is allocated according to the following figure:

IOTA Stakers - 80%

80% of the Shimmer genesis token supply has been distributed to IOTA token holders who staked their tokens to receive SMR during the Shimmer staking event:

Rewards are recorded in the base unit of the Shimmer network, glow. 1 SMR token equals 1,000,000 glow.

In total, 1,450,896,407,249,092 glow (~1,450,896,407 SMR) is allocated to IOTA Stakers in the Shimmer genesis snapshot.

Shimmer Community Treasury (DAO) - 10%

The IOTA community carried out an on-chain vote on the initial Shimmer Ecosystem Funding Proposal and its follow-up proposal.

The community-validated result of the vote allocated 10% of the new total Shimmer supply to a Shimmer Community Treasury DAO.

As a consequence, the genesis Shimmer snapshot allocates 181,362,050,906,137 glow (~181,362,051 SMR) to the address of the Shimmer Community Treasury:

  • smr1qrmakyqt5ezm5k9c0sk39gwfavpktxkjmx0jvh9ejxjq6pr6d39egv2mvuc

Tangle Ecosystem Association (TEA) - 10%

The Tangle Ecosystem Association was set up by the IOTA Foundation to host the new structure of the Ecosystem Development Fund and other ecosystem-focused activities.

The community-validated result of the same on-chain vote on the initial Shimmer Ecosystem Funding Proposal and its follow-up proposal allocates 10% of the total Shimmer genesis supply to the Tangle Ecosystem Association.

As a consequence, the genesis Shimmer snapshot allocates 181,362,050,906,136 glow (~181,362,051 SMR) to the address of the Tangle Ecosystem Association:

  • smr1qpjva0y6qwjmv42dspw2se2776l5qr0r5p2s5m7nrg73qhvm6a6y7uvqxn6

Global Parameters

| Name | Value | Class | Type | Description |
| --- | --- | --- | --- | --- |
| Network Name | "shimmer" | dynamic | string | Identifier string of the network. Its hash is used for the Network ID field in transactions. |
| Protocol Version | 2 | dynamic | uint8 | Protocol version currently used by the network. |
| Max Block Length | 32768 | dynamic | uint32 | Maximum length of a block in bytes. Limits Tangle storage size and communication costs. |
| Max Parents Count | 8 | dynamic | uint32 | Maximum number of parents of a block. |
| Min Parents Count | 1 | dynamic | uint32 | Minimum number of parents of a block. |
| Min PoW Score | 1500 | dynamic | uint32 | Minimum PoW score for blocks to pass syntactic validation. |
| Below Max Depth | 15 | dynamic | uint8 | Maximum allowed delta between the Oldest Cone Root Index (OCRI) of a given block and the current Confirmed Milestone Index (CMI) before the block is considered lazy. |
| Max Shimmer Genesis Supply | 1813620509061365 | static | uint64 | Total amount of Shimmer genesis supply denominated in glow. 1 glow = 0.000001 SMR. |
| Max Inputs Count | 128 | dynamic | uint32 | Maximum number of inputs in a transaction payload. |
| Max Outputs Count | 128 | dynamic | uint32 | Maximum number of outputs in a transaction payload. |
| Max Native Token Count | 64 | dynamic | uint32 | Maximum number of different native tokens that can be referenced in one transaction. |
| Max Tag Length | 64 | dynamic | uint8 | Maximum length of a Tag field in bytes. |
| Max Metadata Length | 8192 | dynamic | uint16 | Maximum length of a Metadata field in bytes. |
| VByte Cost | 100 | dynamic | uint32 | Minimum amount of Shimmer (denominated in glow) that needs to be deposited per vbyte of an output. |
| VByte Factor Data | 1 | dynamic | uint8 | Weight of data fields that determines the relation between actual byte size and virtual byte size. |
| VByte Factor Key | 10 | dynamic | uint8 | Weight of key fields that determines the relation between actual byte size and virtual byte size. |
| SLIP-44 Coin Type (decimal) | 4219 | static | uint32 | Registered coin type (decimal) for usage in level 2 of BIP44 described in chapter "Coin type". |
| SLIP-44 Path Component (coin_type') | 0x8000107b | static | string | Registered path component for usage in level 2 of BIP44 described in chapter "Coin type". |
| Bech32 Human-Readable Part | smr | static | string | HRP prefix to use for Bech32 encoded Shimmer addresses. (e.g. smr1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6xhcazjh) |
| Bech32 Human-Readable Part (Test) | rms | static | string | HRP prefix to use for Bech32 encoded Shimmer addresses on test- or development networks. (e.g. rms1qrhacyfwlcnzkvzteumekfkrrwks98mpdm37cj4xx3drvmjvnep6xrlkcfw) |

Rationale for parameter choices

Proof-of-work

Initially, Min PoW Score was chosen to roughly match the difficulty of a data transaction in the legacy IOTA protocol:

  • The payload length (signatureMessageFragment) of a legacy transaction is 2187 trytes or 1100 - 1300 bytes depending on the encoding.
  • With a minimum weight magnitude (trailing zero trits) of 14, this corresponds to a PoW score of about 4000.

The Stardust protocol debuts on Shimmer and as such, it is the first network capable of supporting L2 smart contract chains (ISC). One possible bottleneck of smart contract chain performance is the time it takes to compute PoW for smart contract chain anchor transactions, as these are typically bigger than the aforementioned legacy IOTA transaction size.

Accordingly, Shimmer lowers Min PoW Score to 1500 to ease the operation of smart contract chains and facilitate the development of the L2 smart contract ecosystem. This value was chosen based on calculations with block sizes for smallest possible blocks (46 byte -> minimum weight magnitude of 11) and typical ISC on-chain requests for chain deployment and chain deposits (~573 byte -> minimum weight magnitude of 13).
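
These figures follow from the PoW score definition: 3 raised to the number of trailing zero trits of the block hash, divided by the block size in bytes. A small sketch (function names are ours) reproduces the minimum weight magnitudes quoted above:

```python
import math

def pow_score(zero_trits: int, block_size: int) -> float:
    """PoW score: 3**(trailing zero trits) divided by block size in bytes."""
    return 3 ** zero_trits / block_size

def required_zero_trits(min_score: int, block_size: int) -> int:
    """Smallest trailing-zero-trit count whose score reaches min_score."""
    return math.ceil(math.log(min_score * block_size, 3))

# 46-byte minimal block -> minimum weight magnitude 11
# ~573-byte ISC on-chain request -> minimum weight magnitude 13
```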

Transaction and block limits

The block parameters Max Block Length and Max Parents Count, as well as the transaction parameters Max Inputs Count, Max Outputs Count, Max Native Token Count, Max Tag Length and Max Metadata Length govern block and transaction validity. Their values have been chosen to ensure that the protocol remains functional under constrained resources. Furthermore, choosing more conservative values here is preferable, as such limits can always be raised later while preserving backward compatibility.

Dust protection

The VByte Cost is the core parameter of the dust protection. The reasoning behind its value is explained in TIP-19 Dust Protection.
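
For illustration, the minimum storage deposit of an output follows the vbyte model of TIP-19 with the three parameters from the Global Parameters table; the sketch below (names are ours) shows the arithmetic under that assumption:

```python
# Shimmer values from the Global Parameters table above.
VBYTE_COST = 100         # glow per vbyte
VBYTE_FACTOR_DATA = 1    # weight of data fields
VBYTE_FACTOR_KEY = 10    # weight of key fields

def min_storage_deposit(data_bytes: int, key_bytes: int) -> int:
    """Minimum deposit in glow for an output whose serialized form contains
    data_bytes of data-weighted fields and key_bytes of key-weighted fields."""
    vbytes = VBYTE_FACTOR_DATA * data_bytes + VBYTE_FACTOR_KEY * key_bytes
    return VBYTE_COST * vbytes
```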

Copyright

Copyright and related rights waived via CC0.

tip: 34
title: Wotsicide (Stardust)
description: Define migration from legacy W-OTS addresses to post-Chrysalis network
author: Luca Moser (@luca-moser), Wolfgang Welz (@Wollac)
discussions-to: https://github.com/iotaledger/tips/pull/74
status: Obsolete
type: Standards
layer: Core
created: 2022-04-21
replaces: 17

Abstract

This TIP defines the migration process of funds using the legacy Winternitz one-time signature scheme (W-OTS) to the current network.

Motivation

With Chrysalis, the IOTA protocol moved away from W-OTS as it created a number of security, protocol and UX issues:

  • W-OTS signatures are big and make up a disproportionate amount of data of a transaction.
  • It is only safe to spend from an address once. Spending multiple times from the same address reveals random parts of the private key, making any subsequent transfers (other than the first) susceptible to thefts.
  • As a prevention mechanism to stop users from spending multiple times from the same address, nodes have to keep an ever growing list of those addresses.

As the current protocol no longer supports W-OTS addresses, there needs to be a migration process from W-OTS addresses to Ed25519 addresses. To make this migration as smooth as possible, this TIP proposes a mechanism allowing users to migrate their funds at any time with only a small delay until they are available on the new network.

This TIP outlines the detailed architecture of how users will be able to migrate their funds and specifies the underlying components and their purposes.

Specification

On a high-level the migration process works as follows:

  • Users create migration bundles in the legacy network which target their Ed25519 address in the new network.
  • The Coordinator then mints those migrated funds in so-called Receipt Milestone Options, which are placed within milestones on the new network.
  • Nodes in the new network evaluate receipts and book the corresponding funds by creating new UTXOs in the ledger.

Legacy network

Migration bundle

The node software no longer books ledger mutations to non-migration addresses. This means that users are incentivized to migrate their funds as they want to use their tokens. See this document on what migration addresses are.

A migration bundle is defined as follows:

  • It contains exactly one output transaction of which the destination address is a valid migration address and is positioned as the tail transaction within the bundle. The output transaction value is at least 1'000'000 tokens.
  • It does not contain any zero-value transactions which do not hold signature fragments. This means that transactions other than the tail transaction must always be part of an input.
  • Input transactions must not use migration addresses.

The node will only use tail transactions of migration or milestone bundles for the tip-pool. This means that past cones referenced by a milestone will only include such bundles.

The legacy node software is updated with an additional HTTP API command called getWhiteFlagConfirmation which, given request data in the following form:

{
    "command": "getWhiteFlagConfirmation",
    "milestoneIndex": 1434593
}

returns data for the given milestone white-flag confirmation:

{
    "milestoneBundle": [
        "SDGKWKJAG...",
        "WNGHJWIFA...",
        "DSIEWSDIG..."
    ],
    "includedBundles": [
        [
            "SKRGI9DFS...",
            "NBJSKRJGW...",
            "ITRUQORTZ..."
        ],
        [
            "OTIDFJKSD...",
            "BNSUGRWER...",
            "OPRGJSDFJ..."
        ],
        ...
    ]
}

where milestoneBundle contains the milestone bundle trytes and includedBundles is an array of tryte arrays of included bundles in the same DFS order as the white-flag confirmation. Trytes within a bundle "array" are sorted from currentIndex = 0 ascending to the lastIndex.

This HTTP API command allows interested parties to verify which migration bundles were confirmed by a given milestone.
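
A hypothetical client for this command might look as follows; the endpoint URL and helper names are assumptions for illustration, not part of the specification:

```python
import json
import urllib.request

LEGACY_NODE_URL = "http://localhost:14265"  # hypothetical legacy node endpoint

def build_request(milestone_index: int) -> bytes:
    """JSON body for the getWhiteFlagConfirmation command."""
    return json.dumps({
        "command": "getWhiteFlagConfirmation",
        "milestoneIndex": milestone_index,
    }).encode()

def get_white_flag_confirmation(milestone_index: int) -> dict:
    """Query a legacy node for the bundles confirmed by the given milestone."""
    req = urllib.request.Request(
        LEGACY_NODE_URL,
        data=build_request(milestone_index),
        headers={"Content-Type": "application/json",
                 "X-IOTA-API-Version": "1"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```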

Milestone inclusion Merkle proof

The Coordinator will only include migration bundles (respectively the tails of those bundles) in its inclusion Merkle proof. Nodes which do not run with the updated code will crash.

Preventing non-migration bundles

As an additional measure to prevent users from submitting never confirming non-migration bundles (which would lead to key-reuse), nodes will no longer accept non-migration bundles in the HTTP API.

HTTP API level checks:

  • The user must submit an entire migration bundle. No more single zero-value transactions, value-spam bundles etc. are allowed.
  • Input transactions are spending the entirety of the funds residing on the corresponding address. There must be more than 0 tokens on the given address.

Wallet software must be updated to no longer support non-migration bundles.

There are no restrictions put in place on the gossip level, as it would be too complex to filter out non-migration transactions there; however, these transactions will never become part of a milestone cone.

Current network

Receipt Milestone Option

Each Milestone Essence as specified in TIP-29 can contain a Receipt Milestone Option. Receipts allow for fast migration of funds from the legacy into the new network.

Serialized layout

The following table describes the entirety of a Receipt Milestone Option in its serialized form following the notation from TIP-21:

| Name | Type | Description |
| --- | --- | --- |
| Milestone Option Type | uint8 | Set to value 0 to denote a Receipt Milestone Option. |
| Migrated At | uint32 | The index of the legacy milestone in which the listed funds were migrated. |
| Final | uint8 | The value 1 indicates that this receipt is the last receipt for the given Migrated At index. |
| Funds Count | uint16 | Denotes how many migrated fund entries are within the receipt. |
| Funds | anyOf Migrated Funds Entry | The listed migrated fund entries. |
| Treasury | oneOf Treasury Transaction | The Treasury Transaction accompanying this receipt. |

Migrated Funds Entry

| Name | Type | Description |
| --- | --- | --- |
| Tail Transaction Hash | ByteArray[49] | The t5b1 encoded tail transaction hash of the migration bundle. |
| Address | oneOf Ed25519 Address | The target address of the migrated funds. |
| Amount | uint64 | The amount which was migrated. |

Ed25519 Address

| Name | Type | Description |
| --- | --- | --- |
| Address Type | uint8 | Set to value 0 to denote an Ed25519 Address. |
| PubKeyHash | ByteArray[32] | The raw bytes of the Ed25519 address which is a BLAKE2b-256 hash of the Ed25519 public key. |

Validation

Syntactic validation
  • Final must be either 0 or 1.
  • Funds:
    • Funds Count must be 0 < x ≤ Max Inputs Count.
    • For each fund entry the following must be true:
      • Amount must be at least 1'000'000.
    • The fund entries must be sorted with respect to their Tail Transaction Hash in lexicographical order.
    • Each Tail Transaction Hash must be unique.
  • Treasury must be a syntactically valid Treasury Transaction as described in the Treasury Transaction section.
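
The syntactic rules above can be sketched as a checklist; this is a simplified illustration, and the names and data shapes are ours:

```python
MAX_INPUTS_COUNT = 128       # from the global protocol parameters
MIN_MIGRATION_AMOUNT = 1_000_000

def receipt_syntax_ok(final: int, funds: list) -> bool:
    """funds is a list of (tail transaction hash, amount) pairs."""
    if final not in (0, 1):
        return False
    if not 0 < len(funds) <= MAX_INPUTS_COUNT:
        return False
    if any(amount < MIN_MIGRATION_AMOUNT for _, amount in funds):
        return False
    hashes = [h for h, _ in funds]
    # Entries must be sorted lexicographically by tail hash, with no duplicates.
    return hashes == sorted(hashes) and len(set(hashes)) == len(hashes)
```
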
Semantic validation

Semantic validation is checked with respect to the previous Receipt Milestone Option, i.e. the receipt whose milestone index is the largest while still being less than that of the current milestone.

  • Migrated At must not be smaller than in the previous receipt.
  • If the previous receipt has Final set to 1, Migrated At must be larger than the previous.
  • The Amount of the current Treasury Output plus the sum of all Amount fields of the current Migrated Funds Entries must equal the Amount of the previous Treasury Output.
Legitimacy of migrated funds

While the syntactic and semantic validation ensure that the receipt's integrity is correct, it does not actually tell whether the given funds were really migrated in the legacy network.

In order to validate this criterion, the node software performs the following operations:

  1. The HTTP API of a legacy node is queried for the Tail Transaction Hash of each Migrated Funds Entry.
  2. The node checks whether the Migrated Funds Entry matches the response from the legacy node.
  3. Additionally, if the receipt's Final flag was set to 1, it is validated whether all funds for the given legacy milestone were migrated, i.e. whether for each Migration Bundle confirmed by that milestone there exists a Migrated Funds Entry in the current or a previous receipt.

If the operation fails, the node software must gracefully terminate with an appropriate error message.

Treasury Transaction

A Treasury Transaction contains a reference to the current Treasury Output (in the form of a Treasury Input object) and a Treasury Output which deposits the remainder.

The Treasury Output cannot be referenced or spent by transactions, it can only be referenced by receipts. It can be queried from the HTTP API and needs to be included within snapshots in order to keep the total supply intact.

The following table describes the entirety of a Treasury Transaction in its serialized form following the notation from TIP-21:

| Name | Type | Description |
| --- | --- | --- |
| Input | oneOf Treasury Input | The Treasury Input referencing the current Treasury Output. |
| Output | oneOf Treasury Output | The Treasury Output depositing the remainder. |

Treasury Input

Equivalent to a normal UTXO Input, but instead of a transaction it references a milestone.

| Name | Type | Description |
| --- | --- | --- |
| Input Type | uint8 | Set to value 1 to denote a Treasury Input. |
| Milestone ID | ByteArray[32] | The Milestone ID of the milestone that created the referenced Treasury Output. |

Treasury Output

Represents the treasury of the network, i.e. the not yet migrated funds.

| Name | Type | Description |
| --- | --- | --- |
| Output Type | uint8 | Set to value 2 to denote a Treasury Output. |
| Amount | uint64 | The amount of funds residing in the treasury. |

Booking receipts

After successful receipt validation, the node software generates UTXOs in the following form: For each Migrated Funds Entry a Basic Output (see TIP-18) is created with Amount matching the Amount field of the entry as well as a single Address Unlock Condition for the entry's Address. All other fields of the output are left empty. Normally, the Output ID corresponds to Transaction ID plus Output Index. However, as for those migrated outputs there is no corresponding creating transaction, the Milestone ID of the encapsulating milestone is used as the Transaction ID part. In this case, the Output Index corresponds to the index of the corresponding Migrated Funds Entry.

All the generated Basic Outputs are then booked into the ledger and the new Treasury Output is persisted as an UTXO using the Milestone ID of the receipt which included the Treasury Transaction payload.
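
The Output ID derivation described above can be sketched as follows; this is a minimal illustration and the function name is ours:

```python
def migrated_output_id(milestone_id: bytes, entry_index: int) -> bytes:
    """34-byte Output ID of a migrated output: the Milestone ID stands in for
    the Transaction ID, followed by the index of the Migrated Funds Entry as a
    little-endian uint16 Output Index."""
    assert len(milestone_id) == 32
    return milestone_id + entry_index.to_bytes(2, "little")
```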

Rationale

  • At the current legacy network ledger size of 261446 entries (addresses with ≥ 1'000'000 tokens), it would take at least 2058 receipts to migrate all the funds. While theoretically the Max Message Length allows for more entries to be included in one receipt, the number is limited by the fact that the index of the Migrated Funds Entry is used to generate the Output Index of the generated output. As such, the maximum number of Migrated Funds Entries should also be limited by Max Inputs Count.
  • Assuming the best case scenario in which all 261446 entries are sent to migration addresses in the legacy network, these funds could therefore be migrated into the new network within ~5.7h (at a 10 second milestone interval). Of course, in practice users will migrate over time and the receipt mechanism will need to be in place as long as the new network runs.

Copyright

Copyright and related rights waived via CC0.

tip: 35
title: Local Snapshot File Format
description: File format to export and import ledger state
author: Luca Moser (@luca-moser), Max Hase (@muXxer)
discussions-to: https://github.com/iotaledger/tips/pull/25
status: Active
type: Standards
layer: Interface
created: 2022-05-06
replaces: 9

Summary

This TIP defines a file format for local snapshots which is compatible with Stardust. The version of the snapshot file format described in this TIP is Version 2.

Motivation

Nodes create local snapshots to produce ledger representations at a point in time of a given milestone to be able to:

  • Start up from a recent milestone instead of having to synchronize from the genesis transaction.
  • Delete old transaction data below a given milestone.

For Stardust, this file format has to be adapted to support protocol parameters and to contain the milestone at the snapshot index, in order to extract startup metadata from it.

Detailed design

Since a UTXO based ledger is much larger in size, this TIP proposes two formats for snapshot files:

  • A full format which represents a complete ledger state.
  • A delta format which only contains diffs (created and consumed outputs) of milestones from a given milestone index onwards.

This separation allows nodes to swiftly create new delta snapshot files, which then can be distributed with a companion full snapshot file to reconstruct a recent state.

Formats

All types are serialized in little-endian byte order.

Full Ledger State

A full ledger snapshot file contains the UTXOs (outputs section) of a node's confirmed milestone (Ledger Milestone Index). The milestone diffs contain the changes needed to roll back the output state and regain the ledger state at the snapshot's Target Milestone Index.

While the node producing such a full ledger state snapshot could theoretically pre-compute the actual snapshot milestone state, this is deferred to the consumer of the data to speed up local snapshot creation.

Delta Ledger State

A delta ledger state local snapshot only contains the diffs of milestones starting from a given Full Snapshot Target Milestone Index. A node consuming such data must know the state of the ledger at Full Snapshot Target Milestone Index.

Schema

Output

Defines an output.

| Name | Type | Description |
| --- | --- | --- |
| Output ID | Array&lt;byte&gt;[34] | The ID of the output which is a concatenation of the transaction ID + output index. |
| Block ID | Array&lt;byte&gt;[32] | The ID of the block in which the transaction that generated this output was contained. |
| Milestone Index Booked | uint32 | The milestone index at which this output was generated. |
| Milestone Timestamp Booked | uint32 | The UNIX timestamp in seconds of the milestone which produced this output. |
| Output Length | uint32 | Denotes the length of the output. |
| Output | oneOf BasicOutput, AliasOutput, FoundryOutput, NFTOutput | The serialized output. |
Consumed Output

Defines a consumed output.

| Name | Type | Description |
| --- | --- | --- |
| Output | Array&lt;byte&gt;[Output Length] | The serialized Output (see above). |
| Target Transaction ID | Array&lt;byte&gt;[32] | The ID of the transaction that spent this output. |
Milestone Diff

Defines the diff a milestone produced by listing the created/consumed outputs and the milestone payload itself.

| Name | Type | Description |
| --- | --- | --- |
| Milestone Diff Length | uint32 | Denotes the length of the milestone diff. |
| Milestone Payload Length | uint32 | Denotes the length of the milestone payload. |
| Milestone Payload | Array&lt;byte&gt;[Milestone Payload Length] | The milestone payload in its serialized binary form. |
| Treasury Input Milestone ID | Array&lt;byte&gt;[32] | The ID of the milestone this input references. Only included if the milestone contains a receipt. |
| Treasury Input Amount | uint64 | The amount of this treasury input. Only included if the milestone contains a receipt. |
| Created Outputs Count | uint32 | The amount of outputs generated with this milestone diff. |
| Created Outputs | anyOf Output | The created outputs. |
| Consumed Outputs Count | uint32 | The amount of outputs consumed with this milestone diff. |
| Consumed Outputs | anyOf Consumed Output | The consumed outputs. |
Protocol Parameters Milestone Option

This Milestone Option is used to signal to nodes the commencement of new protocol parameters, such as a new protocol version or PoW difficulty.

| Name | Type | Description |
| --- | --- | --- |
| Milestone Option Type | byte | Set to value 1 to denote a Protocol Parameters Milestone Option. |
| Target Milestone Index | uint32 | The milestone index at which these protocol parameters become active. |
| Protocol Version | byte | The protocol version to be applied. |
| Protocol Parameters | (uint16)ByteArray | The protocol parameters in binary, serialized form. |
Protocol Parameters

Defines protocol parameters.

| Name | Type | Description |
| --- | --- | --- |
| Protocol Version | byte | The version of the protocol. |
| Network Name | (uint8)string | The name of the network from which this snapshot was generated. |
| Bech32HRP | (uint8)string | The human-readable part of the addresses within the network. |
| MinPoWScore | uint32 | The minimum PoW score. |
| BelowMaxDepth | uint8 | The below max depth parameter. |
| RentStructure.VByteCost | uint32 | The token price per virtual byte. |
| RentStructure.VBFactorData | uint8 | The factor to use for data fields. |
| RentStructure.VBFactorKey | uint8 | The factor to use for indexed fields. |
| TokenSupply | uint64 | The token supply. |
Full snapshot file format

Defines what a full snapshot file contains.

| Name | Type | Description |
| --- | --- | --- |
| Version | byte | Denotes the version of this file format. (Version 2) |
| Type | byte | Denotes the type of this file format. Value 0 denotes a full snapshot. |
| Genesis Milestone Index | uint32 | The index of the genesis milestone of the network. |
| Target Milestone Index | uint32 | The index of the milestone of which the SEPs within the snapshot are from. |
| Target Milestone Timestamp | uint32 | The UNIX timestamp in seconds of the milestone of which the SEPs within the snapshot are from. |
| Target Milestone ID | Array&lt;byte&gt;[32] | The ID of the milestone of which the SEPs within the snapshot are from. |
| Ledger Milestone Index | uint32 | The index of the milestone of which the UTXOs within the snapshot are from. |
| Treasury Output Milestone ID | Array&lt;byte&gt;[32] | The milestone ID of the milestone which generated the treasury output. |
| Treasury Output Amount | uint64 | The amount of funds residing on the treasury output. |
| Protocol Parameters Milestone Option Length | uint16 | Denotes the length of the Protocol Parameters Milestone Option. |
| Protocol Parameters Milestone Option | - | The Protocol Parameters Milestone Option that is active at the milestone of which the UTXOs within the snapshot are from. |
| Outputs Count | uint64 | The amount of UTXOs contained within this snapshot. |
| Milestone Diffs Count | uint32 | The amount of milestone diffs contained within this snapshot. |
| SEPs Count | uint16 | The amount of SEPs contained within this snapshot. |
| Outputs | anyOf Output | The UTXOs of the snapshot. |
| Milestone Diffs | anyOf Milestone Diff | The milestone diffs of the snapshot. |
| SEPs | anyOf SEP (Array&lt;byte&gt;[32]) | The solid entry points of the snapshot. |
Delta snapshot file format

Defines what a delta snapshot contains.

| Name | Type | Description |
| --- | --- | --- |
| Version | byte | Denotes the version of this file format. (Version 2) |
| Type | byte | Denotes the type of this file format. Value 1 denotes a delta snapshot. |
| Target Milestone Index | uint32 | The index of the milestone of which the SEPs within the snapshot are from. |
| Target Milestone Timestamp | uint32 | The UNIX timestamp in seconds of the milestone of which the SEPs within the snapshot are from. |
| Full Snapshot Target Milestone ID | Array&lt;byte&gt;[32] | The ID of the target milestone of the full snapshot this delta snapshot builds on. |
| SEP File Offset | uint64 | The file offset of the SEPs field. This is used to easily update an existing delta snapshot without parsing its content. |
| Milestone Diffs Count | uint32 | The amount of milestone diffs contained within this snapshot. |
| SEPs Count | uint16 | The amount of SEPs contained within this snapshot. |
| Milestone Diffs | anyOf Milestone Diff | The milestone diffs of the snapshot. |
| SEPs | anyOf SEP (Array&lt;byte&gt;[32]) | The solid entry points of the snapshot. |

Updating an existing Delta snapshot file

When creating a delta snapshot, an existing delta snapshot file can be reused.

In order to do that, the following steps need to be done:

  1. Open the existing delta snapshot file.
  2. Read the existing delta snapshot file header.
  3. Verify that Version and Full Snapshot Target Milestone ID match between the existing and new delta snapshot.
  4. Seek to the position of Target Milestone Index and replace it with the new value.
  5. Seek to the position of Target Milestone Timestamp and replace it with the new value.
  6. Seek to the position that is written in the existing SEP File Offset and truncate the file at this position.
  7. Add the additional Milestone Diffs at this position.
  8. Add the new SEPs.
  9. Seek to the position of SEP File Offset and replace it with the new value.
  10. Seek to the position of Milestone Diffs Count and replace it with the new value.
  11. Seek to the position of SEPs Count and replace it with the new value.
  12. Close the file.
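
Steps 4-11 can be sketched as follows. The header offsets are derived from the field order in the delta snapshot table above, and the function name is ours, so treat this as an illustration rather than a reference implementation:

```python
import struct

# Byte offsets of the delta snapshot header fields (all integers little-endian):
#   Version (1) | Type (1) | Target Milestone Index (4) |
#   Target Milestone Timestamp (4) | Full Snapshot Target Milestone ID (32) |
#   SEP File Offset (8) | Milestone Diffs Count (4) | SEPs Count (2)
OFF_TARGET_INDEX = 2
OFF_TARGET_TS = 6
OFF_SEP_FILE_OFFSET = 42
OFF_DIFFS_COUNT = 50
OFF_SEPS_COUNT = 54

def update_delta_snapshot(f, new_index, new_ts, new_diffs, new_seps,
                          added_diff_count):
    """Apply steps 4-11 to an already verified, seekable delta snapshot stream.

    new_diffs: serialized additional milestone diffs; new_seps: list of
    32-byte SEPs for the new target milestone.
    """
    # Steps 4-5: patch target milestone index and timestamp.
    f.seek(OFF_TARGET_INDEX)
    f.write(struct.pack("<I", new_index))
    f.seek(OFF_TARGET_TS)
    f.write(struct.pack("<I", new_ts))
    # Read the old SEP file offset and diff count before overwriting them.
    f.seek(OFF_SEP_FILE_OFFSET)
    old_sep_offset, = struct.unpack("<Q", f.read(8))
    f.seek(OFF_DIFFS_COUNT)
    old_diff_count, = struct.unpack("<I", f.read(4))
    # Step 6: truncate the file at the position of the old SEPs.
    f.seek(old_sep_offset)
    f.truncate()
    # Steps 7-8: append the additional milestone diffs and the new SEPs.
    f.write(new_diffs)
    new_sep_offset = f.tell()
    for sep in new_seps:
        f.write(sep)
    # Steps 9-11: patch SEP file offset, milestone diffs count and SEPs count.
    f.seek(OFF_SEP_FILE_OFFSET)
    f.write(struct.pack("<Q", new_sep_offset))
    f.seek(OFF_DIFFS_COUNT)
    f.write(struct.pack("<I", old_diff_count + added_diff_count))
    f.seek(OFF_SEPS_COUNT)
    f.write(struct.pack("<H", len(new_seps)))
```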

Drawbacks

Nodes need to support this new format.

Rationale and alternatives

  • In conjunction with a companion full snapshot, a tool or node can "truncate" the data from a delta snapshot back to a single full snapshot. In that case, the Ledger Milestone Index and Target Milestone Index would be the same. In the example above, given the full and delta snapshots, one could produce a new full snapshot for milestone 1350.
  • Since snapshots may include millions of UTXOs, code generating such files needs to stream data directly onto disk instead of keeping the entire representation in memory. In order to facilitate this, the count denotations for SEPs, UTXOs and diffs are at the beginning of the file. This allows code generating snapshot files to only have to seek back once after the actual count of elements is known.
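
The seek-back-once pattern from the last point can be sketched as follows; the function name and the position of the count field are ours, for illustration only:

```python
import struct

def write_outputs_section(stream, outputs):
    """Stream serialized outputs, then seek back once to patch the count
    field (uint64, little-endian) written at the current stream position."""
    count_pos = stream.tell()
    stream.write(struct.pack("<Q", 0))  # placeholder for Outputs Count
    count = 0
    for out in outputs:                 # outputs may be a generator
        stream.write(out)
        count += 1
    end = stream.tell()
    stream.seek(count_pos)
    stream.write(struct.pack("<Q", count))  # the single seek-back
    stream.seek(end)
    return count
```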

Copyright

Copyright and related rights waived via CC0.