IOTA Improvement Proposal (IIP) Repository
IIPs are improvement proposals for bettering the IOTA technology stack.
Building the IOTA ecosystem is a community effort; therefore, we welcome anyone to propose, discuss, and debate ideas that may later become formalized IIPs.
Propose new ideas
Do you have an idea for improving the IOTA technology stack?
- Head over to the discussions page to browse already submitted ideas or share yours!
- Once your idea is discussed, you can submit a draft IIP (template here) as a PR to the repository.
- You will receive feedback from the IIP Editors, core devs and community members to refine your proposal.
- Once accepted, your IIP is merged as Draft.
- It is your responsibility to drive its implementation and to present a clear plan on how the new feature will be adopted by the network.
- Once the implementation is ready and testing yields satisfactory results, the IIP becomes Proposed.
- Proposed IIPs that are supported by a majority of the network become Active.
You may find more information about the IIP Process in IIP-1.
List of IIPs
- Last updated: 2026-02-24
- The Status of an IIP reflects its current state with respect to its progression to being supported on the IOTA Mainnet.
- Draft IIPs are work in progress. They may or may not have a working implementation on a testnet.
- Proposed IIPs are demonstrated to have a working implementation on the IOTA Devnet or Testnet.
- Active IIPs are supported on the IOTA Mainnet.
- Replaced IIPs have been replaced by a newer IIP.
- Obsolete IIPs are no longer in use.
| # | Title | Description | Type | Layer | Status |
|---|---|---|---|---|---|
| 1 | IIP Process | Purpose and guidelines of the contribution framework | Process | - | Active |
| 2 | Starfish Consensus Protocol | A DAG-based consensus protocol improving liveness and efficiency | Standards | Core | Proposed |
| 3 | Sequencer Improvements | Improved sequencing algorithm for reducing the number of transaction cancellations | Standards | Core | Active |
| 5 | Move View Functions | A standardized interface for application-specific queries to on-chain state | Standards | Interface | Draft |
| 7 | Validator Scoring Mechanism | An automated and standardized system for monitoring validator behavior and scores | Standards | Core | Draft |
| 8 | Dynamic Minimum Commission based on the Validator’s Voting Power per Epoch | A dynamic minimum validator commission rate set to the validator’s voting power percentage to prevent stake hoarding and promote decentralization | Standards | Core | Draft |
| 9 | Abstract IOTA Accounts | Abstract accounts on IOTA enable smart-contract-based authentication of addresses. | Standards | Core | Draft |
| 10 | Package Metadata | Immutable on-chain object that provides trusted metadata about Move packages during execution | Standards | Core | Draft |
Need help?
If you want to get involved in the community, need help getting started, have any issues related to the repository or just want to discuss blockchain, distributed ledgers, and IoT with other people, feel free to join our IOTA Builder Discord.
IIP-1 IIP Process
---
iip: 1
title: IIP Process
description: Purpose and guidelines of the contribution framework
author: Levente Pap (@lzpap)
discussions-to: https://github.com/iotaledger/iips/discussions
status: Active
type: Process
created: 2025-02-12
---

Abstract
An IOTA Improvement Proposal (IIP) is a design document providing information to the IOTA community, or describing a new feature for IOTA or its processes or environment. The IIP should provide a high level technical design or specification and the rationale of the feature.
IIPs are the primary mechanism for proposing new features and standards to the IOTA protocol and related applications, as well as for collecting input from the wider community and documenting the design decisions that go into the IOTA technology.
IIPs are maintained as text files inside the repository, therefore the history and evolution of protocol features are transparent and well documented.
This IIP defines the IIP Process itself to establish a common way of working.
Motivation
The motivation of this IIP is to create a public platform to discuss improvement ideas related to the IOTA technology and define an easy-to-follow process of contributing to their development and implementation.
Design / Specification
IIP Types
There are 3 types of IIPs:
- A Standards Track IIP describes any change that affects most or all IOTA node implementations, such as a change to the network protocol, a change in transaction validity rules, or any change or addition that affects the interoperability of applications using IOTA. Standards Track IIPs consist of two parts, a design document and a reference implementation. Standards Track IIPs can be broken down into following layers:
- Core: Changes or additions to core features of IOTA, including consensus, execution, storage, and account signatures
- Networking: Changes or additions to IOTA’s mempool or network protocols
- Interface: Changes or additions to RPC or API specifications or lower-level naming conventions
- Framework: Changes or additions to IOTA Move contracts and primitives included within the codebase, such as within the IOTA Framework
- Application: Proposals of new IOTA Move standards or primitives that would not be included within the IOTA codebase but are of significant interest to the developer community
- An Informational IIP describes an IOTA design issue, or provides general guidelines or information to the IOTA community, but does not propose a new feature. Informational IIPs do not necessarily represent an IOTA community consensus or recommendation, so users and implementors are free to ignore Informational IIPs or follow their advice.
- A Process IIP describes a process surrounding IOTA, or proposes a change to (or an event in) a process. Process IIPs are like Standards Track IIPs but apply to areas other than the IOTA protocol itself. They may propose an implementation, but not to IOTA’s codebase; they often require community consensus; unlike Informational IIPs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in IOTA development.
It is highly recommended that an IIP outlines a single key proposal, idea or feature; the narrower the scope of the IIP is, the easier it becomes to reach consensus on the proposed feature and incorporate it into the protocol. Several IIPs can form a bundle of changes when linked to each other.
IIP Format and Structure
IIPs must adhere to the format and structure requirements that are outlined in this document. An IIP is written in Markdown format and should have the following parts (optional parts are marked with a *):
| Name | Description |
|---|---|
| Preamble | RFC 822 style headers containing metadata about the IIP, including the IIP number, a short descriptive title (limited to a maximum of 44 characters), a description (limited to a maximum of 140 characters), and the author details. Irrespective of the category, the title and description should not include the IIP number. See below for details. |
| Abstract | A short summary of the technical issue being addressed by the IIP. |
| Motivation | A motivation section is critical for IIPs that want to change the IOTA protocol. It should clearly explain why the existing protocol specification is inadequate to address the problem that the IIP solves. IIP submissions without sufficient motivation may be rejected outright. |
| Specification | The technical specification should provide a concise, high-level design of the change or feature, without going deep into implementation details. It should also describe the syntax and semantics of any new feature. |
| Rationale | The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g. how the feature is supported in other languages. The rationale may also provide evidence of consensus within the community, and should discuss important objections or concerns raised during discussion. |
| Backwards Compatibility* | All IIPs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The IIP must explain how the author proposes to deal with these incompatibilities. IIP submissions without a sufficient backwards compatibility treatise may be rejected outright. |
| Test Cases* | Test cases for an implementation are mandatory for IIPs affecting consensus changes. Tests should either be inlined in the IIP as data or placed in the IIP folder. |
| Reference Implementation* | An optional section that contains a reference/example implementation that people can use to assist in understanding or implementing this specification. |
| Copyright | All IIPs must be in the public domain. See the bottom of this IIP for an example copyright waiver. |

IIP Template
The template to follow for new IIPs is located in the repository.
IIP Process
Parties involved in the process are:
- IIP author: you, the champion who proposes a new IIP. It is the responsibility of the IIP author to drive the progression of the IIP to Active status. This includes initiating public discussion and implementing the proposal as well.
- IIP editor: they deal with administering the IIP process and ensure process requirements are fulfilled.
- Core contributors: technical experts of IOTA who evaluate new IIPs, provide feedback and ensure that only sound and secure features are added to the protocol.
IIP Statuses
The status of the IIP describes its current stage in the IIP process.
| Status | Description |
|---|---|
| Idea | An idea for an improvement to the IOTA technology. Not yet tracked as an official IIP. |
| Draft | The idea has been formally accepted in the repository, and is being worked on by its authors. |
| Proposed | The IIP has a working implementation and has clear plans on how to progress to Active status. |
| Active | The IIP is deployed to the main network or some IIP-specific adoption criteria have been met. |
| Deferred | The IIP author(s) are not working on the IIP currently, but plan to continue in the future. The IIP is on hold. |
| Rejected | The IIP is rejected. |
| Withdrawn | The IIP has been withdrawn by the IIP author(s). |
| Replaced | The IIP is replaced by a newer IIP. Must point to the new IIP in the header. |
| Obsolete | The IIP is rendered obsolete by some future change. |

IIP Workflow
How do new proposals get added to the protocol?
1. All IIPs begin life as an Idea proposed in the public IOTA discussion forum, that is, the GitHub Discussions page of the IIP repository. A public, open discussion should predate any formal IIP submission. If you want to propel your proposal to acceptance, you should make sure to build consensus and support in the community around your proposed changes already in the idea stage.
2. Once the idea has been vetted, your next task is to submit a Draft IIP to the IIP repository as a pull request. Do not assign an IIP number to the draft yet, but make sure that the proposal is technically sound and follows the format and style guides of the IIP Process. Create a sub-folder under the iips folder with the title of the draft (iips/title_of_draft/) and put all assets in this folder. An IIP editor reviews your PR and assigns an IIP number to the draft.
3. Core contributors as well as the broader public evaluate the draft proposal and might ask for modifications or clarifications. The proposal can only be merged into the repository as a draft if it represents a net improvement and does not complicate the protocol unduly.
4. The IIP is merged into the repo with Draft status by the IIP editor/author.
5. When a working implementation is presented and there are clear plans on how to progress the IIP to completion, the IIP author submits a subsequent PR that links its implementation to the IIP and progresses it to the Proposed stage. The IIP is ready to be deployed on testnet.
6. When a Proposed IIP is deemed to have met all appropriate criteria and its implementation has been demonstrated to work reliably in a testnet environment, it is ready to be moved to the main network. Upon deployment, the IIP status must change to Active.

How can an IIP transition from one status to another?
- A Draft IIP might be moved to Deferred status by the IIP author(s) when they are no longer working on the proposal, but plan to continue it in the future. IIP editors might also move any IIP to Deferred if the proposal is not making progress.
- A Draft IIP might be moved to Withdrawn status by the IIP author(s).
- A Draft IIP might be moved to Rejected status by IIP editor(s) or Core contributors if it does not meet the appropriate IIP criteria, or if no relevant progress has been demonstrated on the IIP for at least 3 years.
- A Draft IIP might be moved to Proposed status by the IIP author(s) if it is considered complete, has a working implementation, and has clear plans on how to progress to Active status.
- A Proposed IIP might be moved to Active status if its IIP-specific adoption criteria have been met. For Core IIPs this means deployment on the main network.
- A Proposed IIP might be moved to Rejected status by IIP editor(s) or Core contributors if its implementation puts an undue burden and complexity on the protocol, or if other significant problems are discovered during testing.
- An Active IIP might be moved to Replaced status by a newer IIP. The replaced IIP must point to the IIP that replaces it.
- An Active IIP might be moved to Obsolete status when the feature is deprecated.

How to champion the IIP Process as an IIP author?
- Browse the idea discussion forum before posting a new IIP idea. Someone else might already have proposed your idea, or a similar one. Take inspiration from previous ideas and discussions.
- It is your responsibility as an IIP author to build community consensus around your idea. Involve as many people in the discussion as you can. Use social media platforms, Discord or Reddit to raise awareness of your idea.
- Submit a draft IIP as a PR to the IIP repository. Put extra care into following IIP guidelines and formats. IIPs must contain a link to previous discussions on the topic, otherwise your submission might be rejected. IIPs that do not present convincing motivation, demonstrate lack of understanding of the design's impact, or are disingenuous about the drawbacks or alternatives tend to be poorly received.
- Your draft IIP gets an IIP number assigned by an IIP editor and receives review and feedback from the larger community as well as from Core contributors. Be prepared to revise your draft based on this input.
- IIPs that have broad support are much more likely to make progress than those that don’t receive any comments. Feel free to reach out to the IIP editors in particular to get help to identify stakeholders and obstacles.
- Submitted draft IIPs rarely go through the process unchanged, especially as alternatives and drawbacks are shown. You can make edits, big and small, to the draft IIP to clarify or change the design, but make changes as new commits to the pull request, and leave a comment on the pull request explaining your changes. Specifically, do not squash or rebase commits after they are visible on the pull request.
- When your draft IIP PR gets enough approvals from IIP editors and Core contributors, it can be merged into the repository; however, your job is far from complete! To move the draft into the next status (Proposed), you have to demonstrate a working implementation of your IIP. For Core IIPs, seek help from protocol developers and/or client teams to coordinate the feature implementation. For IRCs, for example, you need to provide the implementation yourself.
- You also need to present a clear plan on how the IIP will be moved to Active status, for example by agreeing on an IIP deployment strategy with Core contributors.
- To move your Draft IIP to the Proposed phase, submit a subsequent PR that links its implementation and devises its route to becoming Active. The latter might be an additional document in the IIP's folder, a link to a public discussion, or a short description or comment on the PR itself.
- To move your Proposed IIP to Active status you need to demonstrate that it has met its specific adoption criteria. For Core IIPs, this means that a majority of network nodes support it. For other IIPs, especially for IRCs, adoption might mean that the standard is publicly available, well documented, and there are applications building on it.

IIP Header Preamble
Each IIP must have an RFC 822 style header preamble preceded and followed by three hyphens (---). The headers must appear in the following order. Headers marked with "*" are optional and are described below. All other headers are required.
| Field | Description |
|---|---|
| iip | IIP number, or "?" before being assigned (assigned by an IIP editor) |
| title | A few words describing the IIP, maximum 44 characters |
| description* | One full short sentence |
| author | A comma separated list of the author's or authors' name + GitHub username (in parenthesis), or name and email (in angle brackets). Example: FirstName LastName (@GitHubUsername), FirstName LastName <foo@bar.com>, FirstName (@GitHubUsername) and GitHubUsername (@GitHubUsername) |
| discussions-to* | The URL pointing to the official discussion thread |
| status | Current status of the IIP. One of: Draft, Proposed, Active, Deferred, Rejected, Withdrawn, Obsolete or Replaced |
| type | IIP type, one of: Standards Track, Process or Informational |
| layer* | Only for Standards Track, defines layer: Core, Networking, Interface, Framework or Application |
| created | Date created on, in ISO 8601 (yyyy-mm-dd) format |
| requires* | Link dependent IIPs by number |
| replaces* | Older IIP being replaced by this IIP |
| superseded-by* | Newer IIP replacing this IIP |
| withdrawal-reason* | A sentence explaining why the IIP was withdrawn (only needed when status is Withdrawn) |
| rejection-reason* | A sentence explaining why the IIP was rejected (only needed when status is Rejected) |

Linking IIPs
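For illustration, the preamble of a hypothetical, not-yet-numbered draft IIP might look like this (all values, including the discussion thread number, are invented for the example):

```yaml
---
iip: ?
title: Example Feature
description: A one-sentence description of the example feature.
author: FirstName LastName (@GitHubUsername)
discussions-to: https://github.com/iotaledger/IIPs/discussions/123
status: Draft
type: Standards Track
layer: Interface
created: 2025-01-01
---
```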
References to other IIPs should follow the format IIP-N where N is the IIP number you are referring to. Each IIP that is referenced in an IIP MUST be accompanied by a relative Markdown link the first time it is referenced, and MAY be accompanied by a link on subsequent references. The link MUST always be done via relative paths so that the links work in this GitHub repository or forks of this repository. For example, you would link to this IIP with
[IIP-1](../IIP-0001/iip-0001.md).

Auxiliary Files
Images, diagrams and auxiliary files should be included in the subdirectory of the IIP. When linking to an image in the IIP, use relative links such as
[IIP Process Diagram](../IIP-0001/process.png).

Linking to external resources
External links should not be included, except to the IOTA repository.
Transferring IIP Ownership
It occasionally becomes necessary to transfer ownership of IIPs to a new champion. In general, we'd like to retain the original author as a co-author of the transferred IIP, but that's really up to the original author. A good reason to transfer ownership is because the original author no longer has the time or interest in updating it or following through with the IIP process, or has fallen off the face of the 'net (i.e. is unreachable or isn't responding to email). A bad reason to transfer ownership is because you don't agree with the direction of the IIP. We try to build consensus around an IIP, but if that's not possible, you can always submit a competing IIP.
If you are interested in assuming ownership of an IIP, send a message asking to take over, addressed to both the original author and the IIP editors. If the original author doesn't respond to the email in a timely manner, the IIP editors will make a unilateral decision (it's not like such decisions can't be reversed :)).
IIP Editors
| Name | GitHub username | Email address | Affiliation |
|---|---|---|---|
| Kevin Mayrhofer | Dr-Electron | kevin.mayrhofer@iota.org | IOTA Foundation |
| Gino Osahon | Ginowine | gino.osahon@iota.org | IOTA Foundation |
| Lucas Tortora | lucas-tortora | lucas.tortora@iota.org | IOTA Foundation |
| Salaheldin Soliman | salaheldinsoliman | salaheldin.soliman@iota.org | IOTA Foundation |
| Vivek Jain | vivekjain23 | vivek.jain@iota.org | IOTA Foundation |
| Levente Pap | lzpap | levente.pap@iota.org | IOTA Foundation |

IIP Editor Responsibilities
The IIP editors' essential role is to assist and guard the process of contributing to the IOTA ecosystem, and to provide help and direction to community members as well as to external contributors. If you have a question regarding the IIP process, reach out to them; they will point you in the right direction.
They ensure that only quality contributions are added as IIPs, provide support for IIP authors, and monitor that the IIP process is fair, objective, and well documented.
For each new IIP that comes in, an editor does the following:
- Read the IIP to check if it is ready: sound and complete. The ideas must make technical sense, even if they don't seem likely to get to Active status.
- The title should accurately describe the content.
- Check the IIP for language (spelling, grammar, sentence structure, etc.), markup (GitHub flavored Markdown), code style.
If the IIP isn’t ready, the editor will send it back to the author for revision, with specific instructions.
Once the IIP is ready to be merged as a draft, the editor will:
- Assign an IIP number that does not conflict with other IIP numbers. It might be the PR number, but might also be selected as the next unused IIP number in line.
- Merge the corresponding pull request.
- Send a message back to the IIP author with the next step.
The editors don’t pass judgment on IIPs. We merely do the administrative & editorial part.
Core Contributors
Core contributors consist of several core developers of the IOTA ecosystem. Their job is to evaluate the technical details of IIPs, judge their technical feasibility, and safeguard the evolution of the protocol. Core improvement ideas must be carefully thought through, and their benefits must outweigh their drawbacks.
In order for a draft IIP to be accepted into the repo, it must be signed off by Core contributors. It is also this group that gives the green light for drafts to become Proposed or Active.
Rationale
The IIP process is intended to replace the formerly adopted Tangle Improvement Proposal (TIP) process due to the underlying technological shift.
TIPs refer to the previous generation of IOTA technology and hence are outdated. In order not to confuse contributors, IIP is introduced as a new process to propose, discuss and implement new ideas for the IOTA technology stack.
In order not to reinvent the wheel, the IIP Process draws heavily on the BIP and EIP processes.
Backwards Compatibility
- The current iotaledger/tips repository will be archived.
- All TIPs become Obsolete and are no longer in use.

References
- BIP-1 and BIP-2, Bitcoin Improvement Proposal Purpose and Guidelines
- EIP-1, Ethereum Improvement Proposal Purpose and Guidelines
- CIP-1, Cardano Improvement Proposal Process
Copyright
Copyright and related rights waived via CC0.
IIP-2 Starfish Consensus Protocol
---
iip: 2
title: Starfish Consensus Protocol
description: A DAG-based consensus protocol improving liveness and efficiency
author: Nikita Polianskii (@polinikita)
discussions-to: https://github.com/iotaledger/IIPs/discussions/10
status: Proposed
type: Standards Track
layer: Core
created: 2025-04-16
requires: None
---

Abstract
This IIP proposes Starfish, a DAG-based consensus protocol enhancing Mysticeti. Starfish decouples block headers from transaction data, enabling push-based header dissemination, and encodes transaction data into Reed-Solomon shards for efficient reconstruction. These mechanisms improve liveness, reduce communication complexity to linear, and lower storage overhead, even in Byzantine environments.
Motivation
Starfish addresses three limitations in Mysticeti:
Liveness. Due to Byzantine behaviour, slow or deadlocked network connections, and/or limited computational capability, some validators, hereafter called slow validators, may manage to share their own blocks with only a few selected peers in time. Blocks that reference blocks of slow validators can stay suspended in the block managers for a long time, depending on the depth of the missing causal history. In Mysticeti, the blocks of the recent DAG are fetched by explicitly requesting the missing parents of a given suspended block. This slow synchronization of the recent DAG can trigger liveness issues in Mysticeti. Starfish allows for faster synchronization of the recent DAG.
Communication complexity. For n = 3f+1, in a network with f slow validators we can observe situations where each block of a slow validator, after being disseminated to f non-slow validators, is requested from these validators by the other validators. This may lead to an impractical quadratic communication complexity O(n^2). Starfish keeps the communication complexity linear under all circumstances by using Reed-Solomon codes and disseminating shards of other validators' blocks.
Storage overhead. Currently, each validator stores the whole transaction data associated with a block. With Starfish, validators can store block headers with only one shard of the transaction data, reducing the size of the consensus database.
Specification
Starfish requires the implementation of a new crate, as it contains many new components in consensus and modifies existing modules. Below, we summarize the most important changes compared to the current version of Mysticeti:
- Block Structure:
- Separation of Header and Data: Blocks are split into a header (containing metadata) and a body optionally containing transaction data or a shard. Only headers are signed, and the block digest is calculated solely from the header.
- Data Commitment: Blocks include a Merkle root commitment to encoded transaction data.
- Data Acknowledgment: Once the transaction data of a block is available to a validator, it should acknowledge that in its next block.
- Sharding with Reed-Solomon Codes: Transaction data is encoded into shards using Reed-Solomon codes, allowing reconstruction from a subset of shards. The commitment could be a Merkle tree over the encoded shards. For a validator's own blocks, the full transaction data is sent; for blocks of other validators, shards are sent together with proofs.
- Encoder/Decoder: The block verifier, core, and data manager should be equipped with [n, f+1] Reed-Solomon encoders and decoders to a) ensure the correctness of the computed transaction commitment, b) be able to decode the transaction data from locally available shards, and c) create new blocks.
- Block Verifier: Validates incoming block headers independently. If transaction data is received, verifies its commitment against the header to ensure correctness.
- Data Manager: Handles transaction data retrieval and reconstruction. Requests missing data from the block author first (full data with sharding) or from nodes acknowledging the data (shards). Once reconstructed, data is forwarded to the DagState for storage and future serving.
- Block Creation: Generates blocks with separate headers and transaction data. The data commitment should be computed over the encoded transaction data using a commitment scheme that allows for proofs, e.g. a Merkle tree. Pending data acknowledgments are included in the block header.
- DAG State: In addition, it should track which blocks a validator has transaction data for (pending acknowledgments). Another important structure should provide information about which validators know which block headers, so that only those block headers that are indeed needed are disseminated.
- Linearizer: Tracks data acknowledgments for past blocks, including only quorum-acknowledged data in new commits.
- Streaming own blocks: Broadcast own blocks with their transaction data, together with block headers potentially missing at peers.
- Storage: Separates storage for headers, shards, and own transaction data. Triggers data storage upon availability to minimize overhead.
- Commit Structure: Includes references to headers traversed by the Linearizer for data acknowledgment collection. Lists data blocks with a quorum of acknowledgments with optional bitmaps of acknowledging nodes to optimize data retrieval.
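The quorum rule the Linearizer applies can be stated compactly: with n = 3f + 1 validators, a block's data is only included in a commit once a quorum, typically 2f + 1 validators, has acknowledged its availability. A minimal sketch of that check (function and variable names are illustrative, not from the implementation):

```python
def has_data_quorum(acks: set, n: int) -> bool:
    """True once a quorum (2f + 1 of n = 3f + 1 validators) has
    acknowledged availability of a block's transaction data."""
    f = (n - 1) // 3          # maximum tolerated Byzantine validators
    return len(acks) >= 2 * f + 1
```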
Starfish can be enabled via protocol parameters.
For theoretical details, see eprint.iacr.org/2025/567.
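The [n, f+1] erasure-coding scheme described in the specification can be sketched as follows: the f + 1 data symbols define a polynomial, the n shards are its evaluations, and any f + 1 shards recover the data by Lagrange interpolation. This toy version works over a prime field on integer symbols for clarity; production codes (such as the reed-solomon-simd crate mentioned in the Rationale) operate over binary extension fields on byte data, and all names here are illustrative:

```python
# Toy [n, k] Reed-Solomon erasure code over the prime field GF(P),
# where k = f + 1 data symbols are recoverable from any k of n shards.
P = 2**31 - 1  # a Mersenne prime; real systems use GF(2^8)/GF(2^16)

def rs_encode(symbols, n):
    """Interpret the k data symbols as polynomial coefficients and
    evaluate at x = 1..n, yielding n shards as (x, p(x)) pairs."""
    def p_at(x):
        acc = 0
        for c in reversed(symbols):      # Horner's rule, high-order first
            acc = (acc * x + c) % P
        return acc
    return [(x, p_at(x)) for x in range(1, n + 1)]

def _mul_linear(poly, r):
    """Multiply a coefficient list (low-order first) by (X - r) mod P."""
    out = [0] * (len(poly) + 1)
    for t, c in enumerate(poly):
        out[t + 1] = (out[t + 1] + c) % P
        out[t] = (out[t] - r * c) % P
    return out

def rs_reconstruct(shards, k):
    """Recover the k data symbols from any k shards via Lagrange
    interpolation: p(X) = sum_i y_i * prod_{j!=i} (X - x_j)/(x_i - x_j)."""
    shards = shards[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(shards):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(shards):
            if j != i:
                num = _mul_linear(num, xj)
                denom = (denom * (xi - xj)) % P
        scale = yi * pow(denom, P - 2, P) % P   # Fermat inverse of denom
        for t in range(k):
            coeffs[t] = (coeffs[t] + num[t] * scale) % P
    return coeffs
```

With n = 5 and k = f + 1 = 3, any three shards suffice, so losing the shards held by slow validators does not prevent reconstruction of the block's transaction data.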
Rationale
Starfish's design is driven by the need to enhance Mysticeti's performance under poor network conditions and/or in adversarial environments. Key decisions include:
- Header-Data Decoupling: Since the DAG constructed in Mysticeti is uncertified, block propagation is one of the key issues. We decouple the header from the data in the block structure to ensure that all the required block headers can be pushed. Only block headers are needed for driving the consensus; the transaction data can be retrieved once sequenced.
- Data Acknowledgments: Since we decouple headers from data, we cannot simply sequence the data associated with a block by the vanilla Mysticeti commit rule, as it might be unavailable to a majority of the network. Therefore, block headers need to include acknowledgments about transaction data availability for past blocks, and sequencing transactions requires a quorum of acknowledgments.
- Reed-Solomon Sharding: Chosen for its ability to reconstruct data from any f+1 shards, ensuring linear communication complexity. Reed-Solomon codes are optimal in terms of recoverability and this is a primary reason why we stick with them. In addition, there are libraries (e.g. https://crates.io/crates/reed-solomon-simd) that are very CPU efficient and consume little memory.
- Merkle Tree Commitments: Preferred for their simple proof generation, enabling shard verification without full data.
- Data Manager: To ensure live consensus decisions, it is enough to have all block headers in the causal history of blocks available. The data manager is needed to fetch potentially missing transaction data from peers once it is sequenced and available to a quorum of the network.
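The Merkle tree commitment mentioned above can be sketched as follows. This is a generic binary SHA-256 Merkle tree (duplicating the last node on odd-sized levels), shown only to illustrate how a shard can be verified against the header's data commitment without the full data; it is not the exact scheme used by the implementation:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over the shard hashes."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # pad odd levels by duplication
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])     # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    """Check one shard against the commitment using its proof."""
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```

A validator holding only the header (with the root) and a single shard plus its proof can thus check the shard's integrity before storing or forwarding it.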
Backwards Compatibility
Starfish introduces backwards incompatibilities with the Mysticeti consensus crate:
- Block Structure: The new header-data split and sharding are incompatible with Mysticeti’s monolithic blocks.
- Storage: Mysticeti's full-data storage is replaced by header-shard storage. This storage is not replaceable across validators.
- Protocol Logic: Components like block verifier and linearizer require updates, breaking compatibility with Mysticeti’s logic.
Mitigation:
- Use the crate iotaledger/iota/crates/starfish and enable Starfish with protocol parameters.
- Test compatibility in a public testnet to ensure node upgrades are seamless.
- Remove iotaledger/iota/consensus at a later point.

Test Cases
The new consensus crate will need to be extensively tested. In particular, all existing modules, e.g. the block verifier, will require modifications to existing tests. All new modules (e.g. encoder/decoder) will require testing of typical possible scenarios. To ensure that the new consensus serves its purposes, there should be tests mimicking slow validators that fail to properly disseminate their blocks, and this behaviour should not affect the liveness of the consensus.
Reference Implementation
A prototype implementation is available at github.com/iotaledger/starfish. It includes:
- A new crate with core Starfish-specific components (tracking who knows which block headers, encoding/decoding, data fetcher, Starfish linearizer sequencer, etc.).
- Modified Mysticeti modules (Block store, Linearizer, Block Verifier).
- Simulation scripts to test latency, throughput and bandwidth efficiency locally and in a geo-distributed network.
This draft aids consensus developers and will be refined for production.
Copyright
Copyright and related rights waived via CC0.
IIP-3 Sequencer Improvements
---
iip: 3
title: Sequencer Improvements
description: A sequencing algorithm that assigns the earliest available execution slot to transactions.
author: Can Umut Ileri (@cuileri) and Andrew Cullen (@cyberphysic4l)
discussions-to: https://github.com/iotaledger/IIPs/discussions/14
status: Active
type: Standards Track
layer: Core
created: 2025-04-23
---
Abstract
The sequencer is a protocol component that directly follows consensus and is responsible for assigning the order of execution of transactions writing to shared objects. The current sequencing algorithm requires preserving the gas price ordering of transactions that touch any common shared object. This requirement is too strict and results in suboptimal use of resources. In this IIP, we propose an alternative algorithm that preserves gas price ordering only for transactions that touch all the same shared objects. With this relaxed requirement, the sequencing phase can order transactions more flexibly, enabling better resource utilization during execution. It outputs a new execution order (which may differ from the gas price ordering) for transactions with shared object dependencies without implicitly binding transactions to specific processes. In both the existing sequencer and this proposal, transactions are passed in the order provided by the sequencer to a simple scheduler module, which assigns workers for the execution of each transaction and ensures that a transaction can only begin executing once all transactions touching common shared objects have finished executing.
Motivation
To understand the motivation for the sequencer improvements proposed here, consider the following example.
The consensus module produces a commit containing the following set of transactions without regard for their exact ordering within the commit. The lettered blocks within each transaction indicate that the transaction writes to that shared object. In this example, we have four shared objects, a, b, c and d.
These transactions can then be placed in descending order of gas price where transaction number 1 pays the highest gas price.
Suppose, for the purpose of this example, that each transaction requires one time unit to execute, and for each commit, we want to limit the longest sequential execution to 5 time units to help the execution keep up with the rate of consensus commits. We can clearly see that if we executed every transaction sequentially, it would require 11 time units, so 6 of these transactions would need to be deferred. In fact, we can do much better than this because transactions with no shared object dependencies can be executed in parallel, but we also need to consider preserving the gas-price ordering of transactions to ensure fairness for users.
The job of the sequencer is to determine the execution order of transactions with shared object dependencies given a constraint on the total expected execution time for all sequentially executed transactions within a consensus commit. The limit placed on expected execution time serves as a congestion control, ensuring that the execution can always keep up with the rate of consensus commits.
The existing sequencer
The existing sequencer iterates over the transactions in decreasing gas price order and assigns a new order allowing for parallel execution of transactions when they have no shared objects in common. If a pair of transactions write to any common shared object, their gas-price ordering must be preserved in this algorithm. The sequencing result in this example is as follows with the existing sequencer.
The first three transactions all touch a, so they must be executed sequentially in their gas price order.
Transaction 4 must be executed after transaction 3 in the current sequencer because they both touch c, so their gas price order must be preserved.
It would be feasible to execute transaction 4 in parallel with transaction 1 or 2 because they do not touch any common shared objects, but this would violate the gas price ordering between 3 and 4.
Transaction 5 is placed after 4, but transaction 6 must be deferred because the cumulative estimated execution time for sequentially executing the previous 5 transactions exceeds the maximum execution duration per commit in this example.
Transactions 8, 9 and 11 can all be executed in parallel without violating the gas price ordering of any transactions that touch common shared objects, but transactions 7 and 10 must also be deferred.
The proposed algorithm
In our proposal, we relax the requirement on gas price ordering so that it only needs to be preserved between transactions that touch exactly the same set of shared objects. We assign the new transaction order one by one as before, but now we simply assign each transaction the earliest execution start time at which no already-sequenced transaction touching any common shared object is scheduled. The result for our example is illustrated below.
The first three transactions are the same as before, but when we get to transaction 4, illustrated in orange here, we can sequence it in parallel to transaction 1 this time, placing transaction 4 in a higher position than transaction 3 because we do not need to preserve their gas price order as they do not touch an identical set of shared objects. Transactions 5 and 11 can also be sequenced in earlier slots than before this time due to the new algorithm. Additionally, transactions 6 and 10 can now be executed instead of being deferred as indicated in red.
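The two sequencing rules described above can be sketched as follows. This is an illustrative model only, not the reference implementation: it assumes one time unit of execution per transaction, a fixed per-commit slot budget, and transactions already sorted in descending gas-price order, each represented as the set of shared objects it writes to.

```python
def sequence_canonical(txs, max_slots):
    """Existing rule: a transaction must start after every earlier
    transaction that shares any object with it, preserving gas order."""
    slots, deferred = {}, []
    for i, objs in enumerate(txs):
        start = 0
        for j, s in slots.items():
            if objs & txs[j]:
                start = max(start, s + 1)  # follow every earlier conflict
        if start < max_slots:
            slots[i] = start
        else:
            deferred.append(i)  # exceeds the per-commit execution budget
    return slots, deferred


def sequence_improved(txs, max_slots):
    """Proposed rule: place each transaction in the earliest slot that
    holds no already-sequenced transaction touching a common object."""
    slots, deferred = {}, []
    for i, objs in enumerate(txs):
        start = 0
        while start < max_slots and any(
            s == start and objs & txs[j] for j, s in slots.items()
        ):
            start += 1
        if start < max_slots:
            slots[i] = start
        else:
            deferred.append(i)
    return slots, deferred
```

On a small hypothetical workload `[{a}, {a}, {a,c}, {c}]` with a 3-slot budget, the canonical rule defers the last transaction (it conflicts with the third, which occupies the last slot), while the improved rule schedules it in slot 0 alongside the highest-paying transaction, mirroring the behaviour of transaction 4 in the worked example.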
Specification
The proposed algorithm is specified in this paper. The paper also provides proofs of a gas price fairness property and of improved throughput compared with the existing algorithm.
Backwards Compatibility
The proposed sequencing algorithm is not backwards compatible with the existing one, so must be implemented as a protocol upgrade with a feature flag to enable the new functionality. This has been done in the implementation referenced below.
Reference Implementation and Testing
This IIP has been implemented in this PR, including unit tests for the new functionality with example scenarios. Additionally, the new sequencer has been compared extensively with the old sequencer using a spammer, as detailed in this report. We present a representative subset of the results here to make them publicly accessible.
Experiment Results
The proposed algorithm reduces the number of deferments at each round, which prevents transaction cancellations caused by repeated deferments during periods of congestion. To measure the performance of the proposed algorithm in reducing the number of cancellations, we stressed the network using a synthetic transaction set that consists of both directly and indirectly conflicting transactions. (An indirect conflict refers to two transactions with disjoint input object sets that nonetheless both share at least one object with other conflicting transactions.)
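One possible formalization of this conflict taxonomy (the exact definition used in the experiments is an assumption of this sketch) treats each transaction as a set of input object ids:

```python
def direct_conflict(a, b):
    """Two transactions conflict directly if their input sets intersect."""
    return bool(a & b)


def indirect_conflict(a, b, workload):
    """Two transactions with disjoint input sets conflict indirectly if
    each of them directly conflicts with some common third transaction."""
    if direct_conflict(a, b):
        return False
    return any(direct_conflict(a, c) and direct_conflict(b, c)
               for c in workload if c is not a and c is not b)
```

For example, transactions touching only `{a}` and only `{b}` do not conflict directly, but become indirectly conflicting once a "bridge" transaction touching `{a, b}` is part of the workload.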
The figure below shows the cumulative number of cancellations over time during different trials of an experiment with 4000 transactions for both existing (canonical) and proposed (improved) algorithms. The number of cancellations was reduced by 60% on average.
For comparison, the figure below shows the cumulative number of successfully executed transactions from the same experiment.
Note that the above experiment exemplifies the potential benefits of the improved sequencer well, because the pattern of direct and indirect conflicts exposes the shortcomings of the canonical sequencer. However, for any consensus commit, the improved sequencer will always sequence at least as many transactions as the canonical sequencer, as proven in this paper. To further demonstrate the effectiveness of the new approach, we repeated the experiments with transactions whose input sets were generated randomly, without controlling the ratio of direct/indirect conflicts. We note that due to the random nature of these experiments, transactions are most likely to be in direct conflict, meaning there is less room for improvement. Even in this case, the number of cancellations was reduced by 7% on average. The figure below summarizes the comparison between the two algorithms in terms of the average number of cancellations for a varying number of objects per transaction.
Copyright
Copyright and related rights waived via CC0.
IIP-5 Move View Functions
iip: 5
title: Move View Functions
description: A standardized interface for application-specific queries to on-chain state
author: Levente Pap (@lzpap), Mirko Zichichi (@miker83z)
discussions-to: https://github.com/iotaledger/IIPs/discussions/18
status: Draft
type: Standards Track
layer: Interface
created: 2025-07-22
requires: None
Abstract
This proposal introduces a standardized interface for ergonomic, application-specific queries to on-chain state. Move view functions are developer-defined on-chain read APIs that can be easily queried off-chain without requiring transaction signing or state mutation (a concept based on Ethereum’s Solidity view functions). The proposal aims to improve developer productivity and simplify access to on-chain data through RPC, CLI, and SDK interfaces.
Motivation
Currently, developers must write a significant amount of custom client-side logic to inspect on-chain data. This often involves multiple layers of fetching object IDs, deserializing raw bytes, and navigating dynamic fields just to reach a piece of information stored within a Move object. While tools like `dev-inspect` or `dry-run` exist, they suffer from limitations such as complexity of usage and difficulty in decoding return values. This proposal aims to address the following developer pain points:
- Being forced to understand the layout of a Move data structure and how to map it to RPC calls in order to fetch a piece of data, e.g., getting nested object IDs.
- Some data structures cannot easily be accessed through current RPC calls because of the way they use dynamic fields, e.g., Bag and Table data structures.
- Using `dev-inspect` can become complicated because the function return values the developer gets are BCS bytes that then need to be parsed, assuming the return type is known.
Specification
This proposal defines a developer interface to support Move View Functions. The following is a list of specifications related to its implementation:
- A View Function is a function in a Move module that has a return type and does not alter the state of the ledger, i.e., when the Move View Function interface is used, no transactions are submitted to the network for inclusion in the ledger.
- Move View Functions are callable via at least one new RPC method that supports type parameters and function arguments.
- The use of such an interface MUST NOT require signature checks, i.e., invoking a Move View Function that takes an Owned Object as input MUST be possible for anyone, not only the owner of that object.
- The use of such an interface MUST NOT require the usage of a gas coin; spam attacks SHOULD be dealt with at the RPC level and not at the execution level, because a Move View Function does not alter the state of the ledger and thus cannot deduct gas.
- Returned results MUST be resolved, i.e., the Move types deserialized, and then formatted in JSON.
- The interface MUST be integrated into the SDKs and the CLI.
The following specifies the proposed developer interfaces.
iota_view JSON-RPC method
Executes a Move View Function. This allows nearly any Move call to a function with a return type, with any arguments. The function’s result values are returned decoded using the appropriate Move type.
Parameters
| Name | Required | Description |
|---|---|---|
| function_name | Yes | The fully qualified name of the Move function as `<package_id>::<module_name>::<function_name>`, e.g. `0x3::iota_system::get_total_iota_supply` |
| type_args `<[TypeTag]>` | Yes | The type arguments of the Move function |
| arguments `<[IotaJsonValue]>` | Yes | The arguments to be passed into the Move function, in IotaJson format |

Result
| Name | Required | Description |
|---|---|---|
| error `<[string, null]>` | No | Execution error from executing the view function |
| results `<[IotaMoveViewResult, null]>` | No | Execution results (including return values) from executing the view function |

Example
The following example can be taken as a reference for the API model.
Request:
```json
{
  "method": "iota_view",
  "params": {
    "functionName": "0x5e7a300e640f645a4030aeb507c7be16909e6fa9711e7ca2d4397bbd967d5c50::auction::get_auction_metadata",
    "typeArgs": [],
    "arguments": [
      "auc.iota",
      "0x31deb8cbd320867089d52c37fed2d443520aac0fc5a957de1f64f9135b83f42b"
    ]
  }
}
```

Response:

```json
{
  "results": [
    {
      "start": "447575403174913",
      "end": "447576324774913",
      "address": "0xc9f649324694c0c18c6278c3a81945fb3ef0c9b91f21dd5b6a4364447ee348df",
      "value": "500000000"
    }
  ]
}
```

view GraphQL RPC query
A new GraphQL read query is added to the IOTA GraphQL RPC interface with the following structure:
```graphql
view(
  functionName: String!
  typeArguments: [String]
  arguments: [String]
): ViewResults!
```

Example
Query:
```graphql
view(
  functionName: "0x5e7a300e640f645a4030aeb507c7be16909e6fa9711e7ca2d4397bbd967d5c50::auction::get_auction_metadata"
  typeArguments: []
  arguments: [
    "auc.iota",
    "0x31deb8cbd320867089d52c37fed2d443520aac0fc5a957de1f64f9135b83f42b"
  ]
) {
  errors
  results {
    json
  }
}
```
```json
{
  "data": {
    "view": {
      "results": {
        "start": "447575403174913",
        "end": "447576324774913",
        "address": "0xc9f649324694c0c18c6278c3a81945fb3ef0c9b91f21dd5b6a4364447ee348df",
        "value": "500000000"
      }
    }
  }
}
```

Rationale
The implementation of the developer interface specified above entails no required changes to the Move language. Such an interface can be implemented as a JSON or GraphQL RPC backend that relies on the existence of a `dev-inspect` gRPC call to an IOTA full node.

This means that `function_name` allows fetching a Move View Function from the bytecode stored in an on-chain package. The Move type layouts of the function parameters and return values can then be determined. Finally, a `dev-inspect` Move Call can be constructed and executed using the `type_args` and `arguments` parameters, and its return values can be resolved.

However, a future IIP could specify how to make the on-chain read API explicit (e.g., adding a view function annotation to the Move language, as in the Aptos view function).
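A client-side sketch of assembling the proposed `iota_view` call is shown below. The parameter names mirror the specification above; the surrounding JSON-RPC 2.0 envelope and the helper function name are assumptions of this sketch, not part of the proposal.

```python
import json


def build_view_request(function_name, type_args, arguments, request_id=1):
    """Assemble a JSON-RPC 2.0 payload for the proposed iota_view method.

    function_name: fully qualified <package_id>::<module>::<function>
    type_args:     list of type arguments (TypeTag strings)
    arguments:     list of IotaJson-formatted arguments
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "iota_view",
        "params": {
            "functionName": function_name,
            "typeArgs": type_args,
            "arguments": arguments,
        },
    }


# Build a request body, ready to be POSTed to a full node RPC endpoint.
payload = build_view_request("0x3::iota_system::get_total_iota_supply", [], [])
body = json.dumps(payload)
```

Because the view call requires no signature and no gas coin, the payload contains only the function name and its arguments; spam protection would sit at the RPC layer, as required by the specification.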
Backwards Compatibility
This proposal is fully backwards compatible. If this IIP were combined with another IIP specifying an explicit on-chain read API, the developer interface proposed here could be limited to that explicit API.
Test Cases
The new developer interface will need to be extensively tested.
Reference Implementation
There is no reference implementation at the time of writing this IIP.
Copyright
Copyright and related rights waived via CC0.
IIP-7 Validator Scoring Mechanism
iip: 7
title: Validator Scoring Mechanism
description: An automated and standardized system for monitoring validator behavior and scores
author: Olivia Saa (@oliviasaa), Andrew Cullen (@cyberphysic4l)
discussions-to: https://github.com/iotaledger/IIPs/discussions/21
status: Draft
type: Standards Track
layer: Core
created: 2026-01-05
requires: None
Motivation
Validators are the backbone of the IOTA network. The performance of each validator directly influences the network’s overall efficiency and usability. Therefore, it is essential to have a reliable automated system for monitoring their behavior and performance.
Currently, a number of metrics are tracked for all validators which allows manual identification of performance issues, but the only mechanism for penalizing underperforming validators is a manual reporting system. This system requires a quorum of validators to explicitly report a misbehaving peer in order to reduce their rewards. Such an approach is impractical, as it demands continuous manual monitoring, which is an unreasonable burden for most operators. Moreover, no standard criteria exist to guide reporting decisions, resulting in inconsistent and arbitrary thresholds set independently by each validator.
We propose an automated and standardized system for monitoring validator behavior, culminating in a commonly agreed score that reflects each validator’s performance during an epoch. These scores could subsequently be used to directly modify the rewards distributed at the epoch’s end, but the details of reward adjustment are outside the scope of this improvement proposal.
Specification
Performance Metrics
Each validator will monitor its peers throughout an epoch, collecting performance metrics for every other validator. Regardless of the exact set of metrics used, they are divided into two categories: provable and unprovable metrics:
- Unprovable metrics: These represent misbehaviours for which a validator cannot produce a proof. Examples include malformed blocks or the dissemination of invalid blocks. Validators will collect and disseminate counts for those unprovable metrics.
- Provable metrics: These include signed but invalid blocks and equivocations. Validators should produce proofs of these behaviours throughout the epoch and disseminate them.
We propose to introduce a new `ConsensusTransactionKind` named `MisbehaviorReport` specifically for the propagation of both proofs of misbehaviors and the counts related to unprovable metrics collected throughout the epoch. Whenever a block containing a transaction of this type is committed, validators update their counts of provable misbehaviours and also store the sender’s view of the unprovable metrics. This type of transaction should be sent with a reasonable periodicity, so that proofs do not accumulate too much, but without taking unnecessary block space. We propose to follow the same periodicity as checkpoint creation.
Aggregating Metrics and Calculating Scores
At the end of each epoch, validators should aggregate the different perceptions of the committee about all unprovable metrics in a deterministic way. With this aggregation and the provable counts, they calculate a score for each validator.
Scores can be updated during the epoch according to a partial count of the validators’ misbehaviours for monitoring purposes. Furthermore, metric counts and the score itself are used by the protocol at the epoch end to adjust rewards. Thus we calculate scores also with the same periodicity as checkpoint creation.
When the very last checkpoint of the epoch is created, all validators share the same view of all `MisbehaviorReport` transactions from the epoch, so a validator can safely and deterministically calculate a global score from the proofs and counts therein. We do not prescribe any specific form for the scoring function in this improvement proposal, other than that it be a weighted combination of the metrics included in the `MisbehaviorReport` transactions.
Rationale
Performance Metrics
The categorization of metrics as provable or unprovable allows them to be treated differently by applying different weights to these behaviors in the scoring function. Unprovable metrics are highly gameable, and although we can reduce the impact of inaccurate reporting through aggregation of scores, unprovable metrics should not lead to severe penalties. Provable metrics, on the other hand, offer a reliable measurement of specific aspects of validator performance and potential malicious behaviour, and can therefore be weighted more heavily in a scoring function to provide stronger incentives against these misbehaviors.
The mechanisms for sharing metrics also differ between provable and unprovable misbehaviors. Unprovable metrics are entirely local to each validator, so counts of each misbehavior must be explicitly shared by validators and agreed upon through the consensus mechanism. Conversely, because proofs of provable misbehaviors are embedded in blocks, validators already have a common view of all provable metrics. Thus, there is no need to explicitly report any count or score related to provable metrics at the epoch end.
Aggregating Metrics and Calculating Scores
The rationale for aggregating metrics and calculating scores during checkpoint creation is that all scores calculated use globally agreed values. Furthermore, this timing of score calculation also coincides with the epoch change mechanism which calculates critical information for advancing to the next epoch. By calculating scores each checkpoint, we ensure all validators have the same scores calculated at the moment of epoch change which ensures scores can be used to modify epoch rewards as part of future protocol improvements.
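The end-of-epoch calculation described above can be sketched as follows. This is a hypothetical illustration: the IIP deliberately leaves the scoring function open, requiring only that it be a deterministic, weighted combination of the agreed metrics, so the median aggregation and the weights below are placeholders, not part of the proposal.

```python
from statistics import median


def aggregate_unprovable(reports):
    """Deterministically aggregate per-reporter counts for one validator.

    A median is one possible aggregate: it is deterministic given the
    globally agreed MisbehaviorReport set and dampens outlier (or
    dishonest) reporters.
    """
    return median(reports) if reports else 0


def score(unprovable_reports, provable_count,
          w_unprovable=0.2, w_provable=1.0):
    """Weighted penalty score for a validator (higher = worse).

    Provable misbehaviours carry a heavier placeholder weight, matching
    the rationale that they are reliable and should be penalized more.
    """
    return (w_unprovable * aggregate_unprovable(unprovable_reports)
            + w_provable * provable_count)
```

Since every validator computes the aggregate from the same committed `MisbehaviorReport` transactions, all honest validators arrive at identical scores at the epoch boundary.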
Reference Implementation
An initial set of metrics has already been implemented in the iota repository, along with a simple scoring function that serves as a placeholder for a more complete version. This reference implementation is available in the (already merged) PR#7604 and PR#7921. The remaining components required to achieve consensus on an aggregated score are implemented in PR#8521. An example of a scoring function can also be seen in this latter PR.
Backwards Compatibility
The introduction of a new type of consensus message is not backward compatible and must be implemented as a protocol upgrade enabling the new functionality. All other changes are local to the node (such as storing and counting metrics) and should not cause any node behaviour or agreement problems.
IIP-8 Dynamic Minimum Commission based on the Validator's Voting Power per Epoch
iip: 8
title: Dynamic Minimum Commission based on the Validator's Voting Power per Epoch
description: A dynamic minimum validator commission rate set to the validator's voting power percentage to prevent stake hoarding and promote decentralization.
author: DLT.GREEN (@dlt_green)
discussions-to: https://github.com/iotaledger/IIPs/discussions/29
status: Draft
type: Standards Track
layer: Core
created: 2026-01-16
Abstract
This IIP introduces a dynamic minimum validator commission rate that is automatically set to the validator’s effective voting power percentage (VP%) at the start of each epoch. Implemented as a simple max(validator_set_commission, VP%) enforcement, it prevents large validators from using persistently low or zero commissions to hoard stake, thereby reducing centralization risks and improving the sustainability and competitiveness of smaller validators.
Due to the existing protocol cap on individual validator voting power at 10%, the enforced minimum commission will never exceed 10%.
Motivation
In the current IOTA staking system, validators are free to set any commission rate, including 0%. Large validators (or those with significant self-stake) can leverage low commissions to attract a disproportionate share of delegations, leading to concentration of voting power. This creates a feedback loop where dominant validators become increasingly attractive to delegators seeking maximum rewards, marginalizing mid-sized and smaller validators and threatening long-term network decentralization.
The protocol already caps individual validator voting power at 10% (with excess stake redistributed to promote balance). However, low/zero-commission strategies still incentivize stake hoarding up to this cap. Tying the minimum commission to VP% provides a proportional economic disincentive without introducing new caps or complexity.
A recent snapshot (January 2026) showed that only ~6 out of 73 active validators would be immediately affected by this rule, indicating minimal short-term disruption while providing a meaningful guardrail against further centralization. Community discussions highlighted broad concern about stake hoarding and strong preference for a lightweight, proportional solution.
Specification
At the start of each epoch, during committee selection and staking reward calculations:
- Use the validator’s effective voting power percentage (VP%) as determined by the protocol (already capped at a maximum of 10%).
- Enforce the effective minimum commission for the epoch: `effective_commission = max(validator_set_commission, VP%)`
The validator’s publicly set commission remains unchanged for display and future epochs; only the effective rate applied to rewards in the current epoch is adjusted upward if necessary.
No other changes to delegation, reward distribution, or committee selection mechanics are required.
Rationale
- Proportionality: Tying the minimum commission directly to influence (effective voting power) creates a natural economic incentive for stake distribution without arbitrary thresholds.
- Bounded Impact: With the protocol’s 10% voting power cap per validator, the maximum enforced minimum commission under this rule is 10%, ensuring predictability and preventing excessive commission forcing.
- Simplicity: The change requires only a minor adjustment in epoch-boundary logic and imposes negligible computational overhead.
- Non-punitive: Validators can always set a higher commission proactively; the rule only corrects excessively low rates that contribute to centralization.
- Community consensus: Extensive discussion showed strong validator support (~45% of surveyed stake weight fully supported this VP%-based model). The IOTA Foundation/Protocol Research team has expressed support for this lightweight approach as an effective initial guardrail.
This design preserves delegator choice while gently nudging the system toward healthier stake distribution.
Backwards Compatibility
The proposal is fully backwards compatible. No existing functionality is removed, and all current validator configurations remain valid. Validators with set commissions already ≥ their effective VP% are unaffected. Those with lower commissions will see their effective rate increased for the epoch (up to a maximum of 10%), but delegators are not penalized retroactively, and validators retain full control over future settings.
No hard fork is required.
Test Cases
Test cases should verify:
- A validator with 8% effective VP and 0% set commission has effective commission forced to 8%.
- A validator with 10% effective VP (at cap) and 5% set commission has effective commission forced to 10%.
- A validator with 10% effective VP and 12% set commission retains 12% effective commission.
- A validator with potential uncapped stake >10% but effective VP capped at 10% has minimum commission enforced at most 10%.
- VP% calculations correctly handle edge cases (e.g., very small validators with VP% < 0.01%).
Reference Implementation
No full implementation is provided yet, as the change is minimal. It consists of adding the max() enforcement in the epoch transition logic where staking rewards and performance factors are computed, using the already-capped effective voting power values.
Security Considerations
The change introduces no new attack vectors. All calculations use existing, audited stake accounting and voting power capping mechanisms. By reducing incentives for voting power concentration, it strengthens resistance to centralization-based attacks. No new privileges or state mutations are added.
Copyright
Copyright and related rights waived via CC0.
IIP-9 Abstract IOTA Accounts
iip: 9
title: Abstract IOTA Accounts
description: Abstract accounts on IOTA enable smart-contract-based authentication of addresses.
author: Mirko Zichichi (@miker83z), Valerii Reutov (@valeriyr), Levente Pap (@lzpap)
discussions-to: https://github.com/iotaledger/IIPs/discussions/35
status: Draft
type: Standards Track
layer: Core
created: 2026-02-11
requires: IIP-0010
Abstract
This proposal defines a new account type for the IOTA protocol: the Abstract IOTA (AI) Account. An AI Account features a stable on-chain identifier and programmable authentication logic, enabling smart-contract-based verification in place of traditional private key signatures.
From the perspective of decentralized applications (dApps) and their users, AI Accounts function identically to traditional Externally Owned Accounts (EOAs). The primary objective is to enable flexible authentication mechanisms while ensuring compatibility with existing infrastructure and maintaining consistent behavior at the protocol level.
Motivation
The primary aim of this proposal is to significantly improve user experience throughout the IOTA ecosystem by supporting a diverse range of account authentication methods. This initiative enables extensible user authentication while preserving the integrity of the base protocol and ensuring that backward compatibility is not compromised. Under this model, accounts can operate and authenticate without relying on private keys, instead utilizing external inputs or executable code.
Authentication paradigms enabled by this proposal include, but are not limited to:
- Arbitrary cryptographic authentication
- Two-factor authentication (2FA)
- Dynamic multi-signature schemes
- Key rotation and recovery options
- Redundant signing methods
- DAO governance and treasury management
- Developer team or admin accounts for secure contract management
Specification
This section presents the technical specification for implementing an Account Abstraction model within the IOTA protocol. The Abstract IOTA (AI) Account is a new account type designed to support flexible, programmable authentication while maintaining full compatibility with existing protocol components.
At a high level, the mechanism works as follows:
- A third-party developer publishes a Move package containing a custom `AuthenticatorFunction` annotated with `#[authenticator]`.
- An AI Account object is created on-chain and linked to this `AuthenticatorFunction` via an `AuthenticatorFunctionRef`.
- When a transaction is submitted with the AI Account as sender, the protocol invokes the linked `AuthenticatorFunction`, passing the proof data from the `MoveAuthenticator` signature field, instead of performing traditional signature verification.
- If the `AuthenticatorFunction` executes successfully, the transaction is considered authenticated; if it aborts, the transaction is rejected.
Requirements
The proposed Account Abstraction model must adhere to the following constraints:
- Stable Address: Each AI Account must be associated with a stable and persistent address, enabling long-term reference and asset ownership.
- Interoperability: AI Accounts must be capable of interacting seamlessly with any on-chain smart contract, without being restricted to specific protocols, types, or interfaces.
- True Abstraction: dApps must be able to craft and propose transactions to user wallets without distinguishing between AI Accounts and EOAs. The transaction structure and interaction flow should remain identical, such that the dApp is agnostic to the account type it is interacting with.
- Identifier Uniqueness: Each AI Account must be uniquely identifiable. The system must guarantee a one-to-one correspondence between account instances and their associated identifiers.
- Programmable Authentication: Authentication logic must be fully programmable, enabling third-party developers to define and deploy custom authentication schemes tailored to specific application needs.
- Backward Compatibility: The system must preserve the functionality of existing EOAs, including their native signature-based authentication mechanisms.
- Open Account Creation: The creation of AI Accounts must be permissionless, i.e., any entity should be able to initialize an AI Account on behalf of another, provided the intended owner can later authenticate themselves.
- API Uniformity: Developer-facing APIs for querying balances, initiating transfers, and similar operations must behave identically for both AI Accounts and EOAs, promoting transparency and ease of integration.
- Transaction Format Consistency: The format and semantic structure of transaction payloads must remain unchanged. The protocol must handle TransactionData for AI Account transactions identically to legacy EOAs.
- Gas Cost Parity: Current IOTA protocol static signature verification methods and Move-based verification counterparts should be cost-equivalent in terms of gas.
Protocol Design
To support AI Accounts, we propose the addition of a new set of modules within the `iota-framework`. These modules define the core data structures and their associated methods for creating and interacting with an AI Account.
Account Representation and Addressing
AI Accounts are represented as objects within the IOTA Move framework, each associated with a globally unique 32-byte `ObjectID`. The `ObjectID` serves as the AI Account Identifier and is itself a valid `IotaAddress`. Ownership of on-chain objects by AI Accounts is expressed using the existing `AddressOwner(IotaAddress)` variant, where the `IotaAddress` is the AI Account Identifier.
Background: IOTA Ownership Model
Every object in the IOTA Protocol has a well-defined owner. Only the owner can use their objects as input to a transaction. The current ownership semantics are as follows:
- `AddressOwner(IotaAddress)`:
  - `AddressOwner(IotaAddress=PubKey-derived)` – Object owned by a single EOA; it can be set as input of a transaction if the EOA provides a valid signature using the private key associated with the public key from which the `IotaAddress` was derived.
  - `AddressOwner(IotaAddress=ObjectID)` – The owner is an `IotaAddress` which is interpreted as an `ObjectID`. Objects are, in essence, owning other objects. To unlock such owned objects, they need to be received in a transaction the first time they are accessed after being transferred.
- `ObjectOwner(IotaAddress=ObjectID)` – Object owned by another object, in a hierarchical, parent-child relationship. The owned object can be dynamically accessed in transactions where the parent is used as input. This is used for Dynamic Fields.
- `SharedOwner` – Mutably accessible by any address; it can be set as input of a transaction with no checks.
- `ImmutableOwner` – Immutably accessible by any address; it can be set as input of a transaction with no checks.
Rationale for `AddressOwner`
While an ownership of type `ObjectOwner` might appear suitable for an AI Account, since it is indeed an object, it would require additional steps and access control logic when objects are transferred to it. Specifically, if an `ObjectOwner` ownership relationship were established, the sender of an object would be required to use the AI Account object as input of the transfer transaction; moreover, mutable access to the AI Account object would also be necessary.

Employing an `AddressOwner` ownership relation, instead, allows the sender to simply use the AI Account Identifier as the address. This choice renders AI Accounts indistinguishable from EOA addresses at the ownership level. Objects transferred to an AI Account Identifier do not require explicit acceptance by the receiver. When set as inputs of a transaction, their "unlocking" is performed by verifying that the AI Account Identifier matches the sender of the transaction, thereby ensuring that the account is authenticated.
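The indistinguishability at the ownership level can be sketched as follows — an illustrative Python model, not protocol code, using the placeholder addresses from this section:

```python
# Illustrative model: unlocking an AddressOwner-owned object is the same
# check for EOAs and AI Accounts, once the sender has been authenticated.
from dataclasses import dataclass

@dataclass
class OwnedObject:
    object_id: str
    type_name: str
    owner: str  # AddressOwner(IotaAddress): EOA address or AI Account Identifier

def can_unlock(obj: OwnedObject, authenticated_sender: str) -> bool:
    # Identical for both account kinds: the owner address must match the
    # (already authenticated) transaction sender.
    return obj.owner == authenticated_sender

eoa_coin = OwnedObject("0x123", "Coin<IOTA>", owner="0xffff")  # EOA-owned
aa_coin = OwnedObject("0x123", "Coin<IOTA>", owner="0xabc")    # AI-Account-owned

assert can_unlock(eoa_coin, "0xffff")
assert can_unlock(aa_coin, "0xabc")
assert not can_unlock(aa_coin, "0xffff")
```

The difference between the two account kinds lies entirely in *how* the sender was authenticated beforehand, which the following sections describe.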
Figure 1 illustrates how the `AddressOwner(IotaAddress)` ownership variant applies to both EOA and AI Accounts. As shown, the external interface is identical, but the underlying derivation and verification mechanisms differ at the protocol level.
The following table summarizes the key differences:
| Aspect | EOA | Abstract Account |
|---|---|---|
| How is the address of the account derived? | `0xffff` address derived from the EOA public key. | `0xabc` is the `ObjectID` of the object representing the account. `IotaAddress = AIAccountID = ObjectID` |
| Owned object that the transaction wants to unlock (input) | `0x123` with type `Coin<IOTA>`, owner is `0xffff`. | `0x123` with type `Coin<IOTA>`, owner is `0xabc`. |
| Sender field of the transaction | `0xffff` | `0xabc` |
| Signature field of the transaction | Payload containing a valid signature created using the private key (e.g., ED25519). | Payload containing bytes created by logic unknown to the IOTA protocol, but verifiable by a Move `AuthenticatorFunction` that was arbitrarily created by a third-party developer and dynamically linked to the AI Account Identifier. |
Move Authenticator
As seen above, AI Accounts are capable of initiating transactions, i.e., the AI Account Identifier can be used as sender of transactions. However, unlike EOAs, whose authentication derives from private key signatures, the AI Account Identifier is not necessarily tied to a specific cryptographic keypair, nor derived from one. Therefore, a new authentication flow is introduced into the protocol to allow the implementation of arbitrary authentication mechanisms in Move: the `MoveAuthenticator`.

The `MoveAuthenticator` is a protocol-level transaction signature variant that allows transaction senders to submit a vector of arguments in the transaction signature field that are passed to the `AuthenticatorFunction` of the AI Account. This enables third-party developers to implement custom programmable authentication schemes using Move.
Note
Digression on the structure of IOTA transactions
Currently, the structure of IOTA transactions includes:
- `TransactionData`: Contains the core transaction payload, such as Programmable Transaction Block (PTB) commands and gas configuration.
- `GenericSignatures`: A vector of protocol-supported authenticators, currently including `MultiSig`, `Signature`, `ZkLoginAuthenticator` (disabled), and `PasskeyAuthenticator`.

The proposed addition of `MoveAuthenticator` extends the set of `GenericSignature` variants to support dynamic authentication logic. With this extension, third-party developers can deploy Move packages implementing a custom `AuthenticatorFunction`. An AI Account can then be configured to delegate its authentication to such a package. Inputs to the `AuthenticatorFunction` are encoded as `Vec<CallArg>` (where `CallArg` is a type defining pure and object arguments) and provided in the `MoveAuthenticator` signature field. The function either completes successfully or aborts with an error if authentication fails.

Figure 2 compares the authentication flow for transactions issued by EOA and AI Accounts. The key distinction is that EOAs rely on cryptographic signature verification at the protocol level, whereas AI Accounts delegate verification to a developer-defined `AuthenticatorFunction` executed in Move.
The following subsections describe each authentication flow in detail.
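As shared background for both flows, deriving the transaction identifier from the raw `TransactionData` bytes can be sketched as follows. This is an illustrative Python model: it assumes Blake2b-256 as the hash function and uses a placeholder payload rather than the actual serialized layout of `TransactionData`.

```python
# Illustrative sketch: hash the raw TransactionData payload to obtain the
# TransactionDigest, then base58-encode it to form the transaction identifier.
import hashlib

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = BASE58_ALPHABET[rem] + out
    # Leading zero bytes are preserved as '1' characters.
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def transaction_digest(tx_data_bytes: bytes) -> str:
    digest = hashlib.blake2b(tx_data_bytes, digest_size=32).digest()
    return base58_encode(digest)

tx_id = transaction_digest(b"placeholder TransactionData bytes")
assert all(c in BASE58_ALPHABET for c in tx_id)
```

The identifier is deterministic: the same `TransactionData` bytes always yield the same digest, which both authentication flows below take as their starting point.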
(Traditional) EOA Transaction Authentication
`TransactionData` is the payload that makes transactions cryptographically unique. It includes sender, inputs, commands and gas data. Passing the raw bytes of the payload to a hash function yields its digest, i.e., the `TransactionDigest`, which becomes the transaction identifier (encoded in base58).

An EOA user signs the `TransactionDigest` with the EOA private key (the description is simplified for the sake of readability). This signature becomes the payload of the `GenericSignature` part of the transaction. Once transmitted to a validator, the transaction is authenticated by extracting the sender address and transaction digest from the `TransactionData` and verifying them against the signature. If verification fails, the transaction is marked as invalid.
Abstract Account Transaction Authentication
In the case of AI Accounts, the `TransactionData` part is exactly the same as in the EOA case; it necessitates no changes. The difference lies in the `GenericSignature` part. A developer-defined mechanism can be used (e.g., a different signature scheme, passkey method or business logic) to generate a proof on the client side that can be validated by the `AuthenticatorFunction` in Move.

This proof (or set of proofs) becomes the `MoveAuthenticator` payload of the `GenericSignature` field of the transaction. Validators authenticate the transaction by executing the account's dynamically linked `AuthenticatorFunction` Move function with the proof provided as input argument(s). It is important to note that the `AuthenticatorFunction` has a "rich" context: it has access to the `TxContext` contained in the `TransactionData`, i.e., it knows the `TransactionDigest`, and also accesses the inputs and commands of `TransactionData` using an `AuthContext`. This `AuthContext` struct enables parsing of the PTB information included in the `TransactionData`.

Should the execution of the `AuthenticatorFunction` fail for any reason, the transaction is marked as invalid.
The AI Account Interface
The AI Account representation in Move is designed such that third-party developers can implement any arbitrary type. There is no single AI Account framework object type; rather, any object type can become an AI Account provided it implements the required interface.
A Move type is considered an abstract account if, and only if:
- it is an object (i.e., a struct type that has the `key` ability and an `id: UID` field), and
- it has a dynamic field with a key of type `0x2::account::AuthenticatorFunctionRefV1Key` and a value of type `0x2::authenticator_function::AuthenticatorFunctionRef`.

The type `0x2::account::AuthenticatorFunctionRef` contains the fields necessary to uniquely identify an on-chain `AuthenticatorFunction` defined by an external package. For version 1:

```move
public struct AuthenticatorFunctionRefV1<phantom Account: key> has copy, drop, store {
    package: ID,
    module_name: ascii::String,
    function_name: ascii::String,
}
```

Figure 3 illustrates a concrete example in which a custom Authenticator Move Package is deployed and linked to an AI Account to provide authentication:
In the example above, we imagine a scenario where a developer independently develops the Custom Authenticator package deployed at the address `0x789` (i.e., the package id). This package defines the account interface through a module named `custom_auth`. In this module a `0x789::custom_auth::CustomAccount` Move object type is defined, which represents a specific implementation of an AI Account. The `CustomAccount` uses the field `auth_helpers` to store data on-chain; the logic governing this account is arbitrarily defined by the developer.

The only field required to make `CustomAccount` an AI Account is a dynamic field using the `0x2::account::AuthenticatorFunctionRefV1Key` key and `0x2::authenticator_function::AuthenticatorFunctionRef` value. This dynamic field entry can only be created through a framework method in the `0x2::account` module, namely `create_account_v1`.

The object with id `0xabc`, once it becomes an AI Account, has an authentication method clearly defined by the "attached" `AuthenticatorFunctionRef`. This struct references a specific function within the category of Move functions known as `AuthenticatorFunction`. In the example, it references the `0x789::custom_auth::authenticate` function defined in the same module.
The Authenticator Function
The `AuthenticatorFunction` is designed to be as generic as possible, allowing third-party developers to implement arbitrary authentication logic. For instance, in the above example, the `CustomAccount` object's `auth_helpers` field can be used within the `AuthenticatorFunction`. This field contains on-chain data that may have been modified by other parties prior to authentication. In general, an `AuthenticatorFunction` receives inputs and can read from the ledger state to allow or reject access to the AI Account. The `vec<CallArg>` passed through the `MoveAuthenticator` payload of the `GenericSignature` part of the transaction is converted into function parameters similarly to what happens today for PTBs.

An `AuthenticatorFunction` MUST satisfy the following rules:
- Visibility: The function MUST be declared as a `public` non-`entry` function.
- Read-only inputs: All inputs MUST be read-only. Accepted input types are pure types (integers, strings, etc.) and read-only references to objects. Owned Objects MUST NOT be passed as input; only Shared Objects and Immutable Objects are permitted.
- First parameter — Account reference: The first parameter MUST be a reference to the same Move type as the AI Account being authenticated. The object ID of the argument passed for this parameter MUST be exactly equal to the AI Account Identifier (i.e., the sender of the transaction).
- No return type: The function MUST NOT define a return type. Authentication succeeds if the function completes execution without error and fails if the function aborts.
- Context parameters: The second-to-last parameter MUST be `&AuthContext` and the last parameter MUST be `&TxContext`. The `AuthContext` struct exposes the underlying transaction fields (PTB inputs and commands), while `TxContext` exposes the transaction digest, gas parameters, and sponsor details. These context values are not created by the user; the protocol automatically creates and injects them before execution. The presence of `AuthContext` ensures that an `AuthenticatorFunction` cannot be invoked from within Move by other functions.
Execution Lifecycle
From the protocol point of view, the `AuthenticatorFunction` is invoked twice during the transaction lifecycle:

- Optimistic Pre-Consensus Authentication:
  - Upon receiving a transaction, all validators execute the `AuthenticatorFunction` related to the TX's sender account with any shared object read-only references or pure inputs.
  - If the pre-consensus authentication passes, then the TX is treated as any other, i.e., the TX's owned objects (gas payment objects and the TX inputs) are locked and the validator signature needed for a certificate is returned.
  - In this phase, no state modifications occur, i.e., no gas is consumed for the execution of the authentication.
- Post-Consensus Authentication Execution:
  - Once the transaction is ready for post-consensus execution, the `AuthenticatorFunction` is executed for the second time, immediately before the normal execution.
  - The gas object versions for the gas payment are guaranteed for post-consensus execution since a majority of validators locked those gas objects.
  - In this second phase, `AuthenticatorFunction` execution can still fail due to changes in the state of input shared objects. In this case the authentication gas cost is deducted, but no other state changes are committed to the ledger and the transaction execution result is ABORTED.
  - Gas costs for authentication in non-sponsored transactions are deducted from an AI Account's gas object. Sponsored transactions cover authentication costs via gas payment objects owned by the sponsor.
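The two-phase lifecycle can be sketched as follows — an illustrative Python model with `authenticate` standing in for the `AuthenticatorFunction` and gas accounting heavily simplified (all names and amounts are placeholders, not protocol values):

```python
def process_transaction(authenticate, gas_balance, auth_gas_cost=10):
    """Illustrative two-phase authentication lifecycle (simplified)."""
    # Phase 1: optimistic pre-consensus authentication; no gas is consumed.
    if not authenticate():
        return "REJECTED", gas_balance  # transaction never enters consensus
    # (here: owned inputs and gas objects are locked, a certificate is formed)

    # Phase 2: post-consensus re-execution against possibly changed shared state.
    if not authenticate():
        # Authentication gas is deducted, but no other state changes commit.
        return "ABORTED", gas_balance - auth_gas_cost
    # Normal PTB execution follows; only the auth cost is modeled here.
    return "EXECUTED", gas_balance - auth_gas_cost

# An authenticator that reads shared state: it passes pre-consensus but fails
# post-consensus because the shared state changed in between the two runs.
shared_state = {"allowed": True}
calls = []
def flaky_auth():
    calls.append(len(calls))
    if len(calls) == 2:
        shared_state["allowed"] = False  # simulate a concurrent state change
    return shared_state["allowed"]

status, gas = process_transaction(flaky_auth, 100)
assert status == "ABORTED" and gas == 90  # auth gas charged, nothing else committed
```

Note how a pre-consensus failure costs the sender nothing, while a post-consensus failure charges only the authentication gas — matching the two bullet lists above.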
Creating AuthenticatorFunctionRef
The `AuthenticatorFunctionRef`, as described above, is a struct that uniquely identifies a function within a package deployed on-chain. Its creation is enabled via the Package Metadata Standard (see IIP-0010).

Figure 4 shows how the `PackageMetadata` object is used to validate and create an `AuthenticatorFunctionRef` during package publication.
Continuing with the same example from the previous subsection, consider the `CustomAuthenticator` package. When this package is published to the ledger, an associated `PackageMetadata` immutable object is created. This immutable object contains metadata for each module, including functions found in the `0x789::custom_auth` module. Specifically, the `0x789::custom_auth::authenticate` function is designated as an "authenticator" within the `PackageMetadata` object. To be designated as an authenticator, a function MUST follow the rules listed in the previous subsection and MUST be annotated with the `#[authenticator]` function attribute.

The `PackageMetadata` object acts as a source of validation. This object is created by the protocol during package publication. If an `#[authenticator]` attribute is found for a function, the protocol validates that function using the `iota-move-verifier`. Since `PackageMetadata` is an immutable object that can only be created by the protocol, it cannot be forged with an unauthorized authenticator function.

Given the trusted nature of the `PackageMetadata`, an `AuthenticatorFunctionRef` can be created by passing a `PackageMetadata` object as input. The function `0x2::authenticator_function::create_auth_function_ref` takes a `PackageMetadata`, a module name, and a function name, then verifies whether these inputs resolve to a valid authenticator function; if so, it returns an `AuthenticatorFunctionRef`. This reference is then used to create an account.
Rationale
This specification introduces a flexible, developer-centric model for account abstraction that preserves compatibility with the existing transaction format and authorization model. By leveraging Move’s programmability and object-oriented design, the system supports a wide range of use cases—from key rotation and passwordless login to DAO-based access control and cross-device passkey authentication.
Alternative models—such as static multisignature schemes or fixed key lists—were deemed insufficient due to their lack of adaptability to dynamic and composable use cases. In contrast, the AI Account model integrates tightly with the Move-based models for asset management.
Similar models have emerged in other blockchain ecosystems, such as Ethereum’s ERC-4337 and Aptos’ dynamic dispatch system. However, this proposal is uniquely tailored to the IOTA protocol’s architecture, emphasizing on-chain object ownership, deterministic addressing, and native MoveVM integration.
Backwards Compatibility
This proposal is designed to be fully backward compatible with the existing IOTA protocol. The following areas are affected:
| Area | Impact |
|---|---|
| Transaction Format | No changes are made to the `TransactionData` structure. |
| Signature Support | A new `MoveAuthenticator` variant is added alongside the existing variants in `GenericSignature`. Existing signature types remain unaffected. |
| Validation Pipeline | An optional `AuthenticatorFunction` hook is introduced for transaction validation and pre-PTB execution. The existing validation pipeline is not modified. |
| RPC and Wallet APIs | RPC APIs remain unchanged. AI Accounts are fully compatible with existing APIs and indistinguishable from EOAs. However, wallets that wish to support the new account type MUST add support for each authenticator type defined on-chain to derive the bytes supplied to the `MoveAuthenticator`. |
Test Cases
The following test cases SHOULD be implemented to validate the correctness and completeness of the AI Account model:
| Test Case | Description | Reference Move Implementation |
|---|---|---|
| Single-key EOA replication | An AI Account that replicates standard single-key cryptographic authentication, including key rotation capability. | https://github.com/iotaledger/iota/tree/develop/examples/move/iotaccount |
| Dynamic multisig | An AI Account authenticated via dynamic multisignature logic based on signatures or on-chain capabilities. | https://github.com/lzpap/isafe/tree/main/contracts/isafe |
| Function-Call keys | An AI Account authenticated using a method similar to Near's Function-Call keys, limiting access only to the execution of some methods for an account. | https://github.com/iotaledger/iota/tree/develop/examples/move/function_keys |
| Spending limit | An AI Account that several users are authorized to access, with a specific allowance of IOTA coins set for each one. | https://github.com/iotaledger/iota/tree/develop/examples/move/spending_limit |
| Third-party authentication | An AI Account authenticated using an external or third-party verification mechanism, such as ZK proof validation. | https://github.com/iotaledger/iota/pull/10227 |
| Time locked | An AI Account where authentication is constrained by time. | https://github.com/iotaledger/iota/tree/develop/examples/move/time_locked |
Reference Implementation
Main PR against the develop branch: https://github.com/iotaledger/iota/pull/9586
References to Account Abstraction projects in Web3
- ERC-4337: Account Abstraction Using Alt Mempool
- account.tech
- Aptos Account Abstraction
- Sui Account Abstraction Feature Request
Copyright
Copyright and related rights waived via CC0.
IIP-10 Package Metadata
iip: 10
title: Package Metadata
description: Immutable on-chain object that provides trusted metadata about Move packages during execution.
author: Mirko Zichichi (@miker83z), Valerii Reutov (@valeriyr)
discussions-to: https://github.com/iotaledger/IIPs/discussions/36
status: Draft
type: Standards Track
layer: Core
created: 2026-02-17
requires: None
Abstract
`PackageMetadata` is an immutable on-chain object that provides trusted metadata about Move packages during execution. Because `PackageMetadata` objects are created exclusively by the protocol during publish and upgrade operations, Move code can read this metadata with full confidence in its authenticity. This enables on-chain verification of package properties without relying on user-provided claims. This mechanism allows Move modules to introspect package capabilities, verify function signatures, and make decisions based on protocol-attested information.
Motivation
Move execution might require knowledge about external packages, or about the very package being executed: What functions does a package expose? What capabilities does it claim? Is a given function a valid authenticator?

Traditionally, answering these questions required one of the following approaches:
- Trusting user input:
  - accepting claims about packages without verification
  - drawback: user input can be malicious
- Hardcoding knowledge:
  - embedding package-specific logic in modules
  - drawback: hardcoding doesn't scale
- Off-chain verification:
  - checking properties before transaction submission
  - drawback: off-chain checks cannot be trusted or enforced during on-chain execution
`PackageMetadata` solves this by providing protocol-attested package introspection. Because only the protocol can create `PackageMetadata` objects (during publish/upgrade), and because these objects are immutable, Move code can trust their contents completely. This enables:
- On-chain capability discovery: Modules can query what a package provides.
- Dynamic integration: Modules can work with packages they were not compiled against (at the metadata level).
- Protocol-enforced properties: Metadata reflects verified attributes; no need to trust user claims about packages.
Possible use cases exploiting `PackageMetadata` could be:

- Account Abstraction (planned) (see IIP-0009): Move code reads `PackageMetadata` to verify that a function is a valid authenticator and to obtain the account type it authenticates. This enables the `account` module to create `AuthenticatorInfoV1` instances that reference verified authenticator functions.
- View Functions (planned) (see IIP-0005): Modules and clients can discover which functions are safe to call without state changes.
- Capability Verification: Modules can verify package capabilities before granting access.
- Function modifiers: Modules can parse functions of any package to check whether they are entry, private, etc.
Specification
In this section, we present the technical specification for implementing a Package Metadata model, version 1, within the IOTA protocol. The specification begins by outlining a set of functional requirements the model must satisfy, followed by a high-level overview of the proposed architectural approach. Finally, the main set of Move type interfaces is provided as the standard for the first version of this model.
Requirements
The proposed Package Metadata model must adhere to the following constraints:
- Protocol-only creation: `PackageMetadata` objects can only be created by the protocol during publish or upgrade execution. There is no public constructor or creation function exposed to Move code. This guarantees that:
  - all `PackageMetadata` content is derived from verified bytecode,
  - users cannot forge or tamper with metadata,
  - Move code can trust metadata without additional verification.
- Immutability: `PackageMetadata` objects are frozen immediately upon creation.
- Conditional Creation: `PackageMetadata` is created only when meaningful metadata exists, e.g., at least one recognized attribute must be present in a module of the package. Packages without attributes have no `PackageMetadata` object.
- Deterministic `PackageMetadata` id derivation: Given any package id, the corresponding `PackageMetadata` object id can be computed using the derived object mechanism (same as dynamic field id derivation). See https://docs.sui.io/guides/developer/objects/derived-objects. Move code can compute this derivation on-chain.
High-Level Overview
To support `PackageMetadata`, we propose to modify part of the Move compilation, part of the publish/upgrade execution, and to add a new module to the `iota-framework`.

In the following, we illustrate the usage of Package Metadata within the IOTA Account Abstraction model (see IIP-0009), since that is a concrete use of the standard.
```
┌─────────────────────────────────────────────────────────────────┐
│ 1. COMPILATION (Developer Machine)                              │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  #[authenticator]                 ┌──────────────────────┐      │
│  public fun authenticate(..) ────▶│ RuntimeModuleMetadata│      │
│                                   │ embedded in bytecode │      │
│                                   └──────────────────────┘      │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│ 2. PUBLISH/UPGRADE (Protocol Execution)                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────────┐     ┌────────────────────────────┐     │
│  │ Extract metadata    │────▶│ Verify each attribute      │     │
│  │ from bytecode       │     │ (authenticator sig check)  │     │
│  └─────────────────────┘     └────────────────────────────┘     │
│                                │                                │
│                                ▼                                │
│  ┌─────────────────────────────────────────┐                    │
│  │ PROTOCOL creates PackageMetadataV1      │                    │
│  │ - Populates from verified attributes    │                    │
│  │ - Derives object ID from package ID     │                    │
│  │ - Freezes object (immutable)            │                    │
│  └─────────────────────────────────────────┘                    │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│ 3. RUNTIME (Move VM Execution)                                  │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  // Any Move code can read and trust this metadata              │
│                                                                 │
│  public fun create_authenticator_info(                          │
│      metadata: &PackageMetadataV1, // Protocol-created          │
│      module_name: String,                                       │
│      function_name: String,                                     │
│  ): AuthenticatorInfoV1 {                                       │
│      // Safe: metadata is protocol-verified                     │
│      let auth = metadata.get_authenticator(module_name, fn);    │
│      // auth.account_type is TRUSTWORTHY                        │
│      AuthenticatorInfoV1 { ... }                                │
│  }                                                              │
└─────────────────────────────────────────────────────────────────┘
```
Compilation and building phase
The process begins on the developer's machine during package compilation. When the Move compiler encounters a function annotated with a recognized attribute (such as `#[authenticator]`), it records this information in the function's metadata. The compiler performs initial syntax validation, ensuring the attribute is well-formed and applied to an appropriate element, but does not verify semantic correctness (e.g., whether the function signature actually satisfies authenticator requirements).

During the build phase, the collected attribute information is serialized into a `RuntimeModuleMetadata` structure and embedded directly into the module's bytecode. Once all attributes for a module are collected, the `RuntimeModuleMetadataV1` is wrapped in a `RuntimeModuleMetadataWrapper` (which includes a version number) and serialized to BCS bytes. These bytes are then pushed into the module's bytecode metadata vector using a dedicated key.

The `IOTA_METADATA_KEY` is a protocol-defined constant that acts as a reserved namespace. While the bytecode format allows arbitrary metadata entries, the protocol's verifier enforces strict rules:
- Single Entry: A module may have at most one metadata entry with the `IOTA_METADATA_KEY`.
- Valid Structure: The bytes must deserialize to a valid `RuntimeModuleMetadataWrapper`.
- Verified Content: Each attribute within the metadata must pass its corresponding verifier.

Finally, the metadata travels with the bytecode through the publish transaction, ensuring the protocol has access to the original annotations.
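The three verifier rules can be sketched as follows — an illustrative Python model, not the `iota-move-verifier`: module metadata is modeled as a list of `(key, bytes)` entries, JSON stands in for BCS decoding, and `verify_attribute` stands in for the per-attribute verifiers.

```python
# Illustrative sketch of the three rules enforced on the reserved metadata key.
import json

IOTA_METADATA_KEY = "iota::metadata"  # placeholder name for the reserved key

def verify_module_metadata(entries, verify_attribute):
    ours = [value for key, value in entries if key == IOTA_METADATA_KEY]
    if len(ours) > 1:  # Rule 1: Single Entry
        raise ValueError("at most one IOTA_METADATA_KEY entry per module")
    if not ours:
        return None  # no recognized metadata: nothing to verify
    try:  # Rule 2: Valid Structure (JSON stands in for BCS deserialization)
        wrapper = json.loads(ours[0])
    except ValueError as err:
        raise ValueError("invalid RuntimeModuleMetadataWrapper") from err
    for attribute in wrapper["attributes"]:  # Rule 3: Verified Content
        if not verify_attribute(attribute):
            raise ValueError(f"attribute failed verification: {attribute}")
    return wrapper

wrapper = verify_module_metadata(
    [("other", "x"), (IOTA_METADATA_KEY, '{"attributes": ["authenticator"]}')],
    lambda attr: attr == "authenticator",
)
assert wrapper == {"attributes": ["authenticator"]}
```

In the real protocol, a failure of any of these rules fails the entire publish or upgrade transaction, as described in the next section.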
Publish/Upgrade phase
When a publish or upgrade transaction is executed, the protocol takes over. This phase is critical because it establishes the trust boundary: everything that happens here is performed by the protocol itself, not by user code.
Verification
During publish or upgrade, the protocol extracts `RuntimeModuleMetadata` from each module's bytecode using the `IOTA_METADATA_KEY`, deserializes it, and verifies each attribute. For every attribute found, the protocol invokes the corresponding verifier. For authenticator attributes, for instance, this means calling `verify_authenticate_func_v1()`, which checks that the function has the correct visibility, parameter types, and return type (see IIP-0009).

If any verification fails, the entire publish or upgrade transaction fails. User code cannot write to the `IOTA_METADATA_KEY` slot in a way that would bypass verification, because the verifier runs before execution completes, and any invalid metadata causes the transaction to fail.
Object creation
Once all attributes are verified, the protocol constructs the `PackageMetadataV1` object. For each module containing verified attributes, it creates a `ModuleMetadataV1` entry. For authenticator attributes specifically, it extracts the first parameter's type from the verified function signature; this becomes the `account_type` field, representing which object type this authenticator can authenticate.

Finally, the protocol creates the `PackageMetadataV1` object and immediately freezes it, making it immutable. The object is stored on-chain with no owner (immutable objects have no owner), ensuring it cannot be modified or deleted.
During the creation, the protocol derives the metadata object’s ID deterministically from the package’s storage ID, reusing the dynamic field address derivation logic:
```
package_metadata_id
  = derive_object_id(
        package_storage_id,
        <0x2::package_metadata::PackageMetadataKey>,
        {/* dummy bool */},
    )
  = derive_dynamic_field_id(
        package_storage_id,
        <0x2::derived_object::DerivedObjectKey<0x2::package_metadata::PackageMetadataKey>>,
        {/* dummy bool */},
    )
  = Blake2b256(
        HashingIntentScope::ChildObjectId /* 0xF0 */
        || package_storage_id
        || len({/* dummy bool */})
        || {/* dummy bool */}
        || bcs(<0x2::derived_object::DerivedObjectKey<0x2::package_metadata::PackageMetadataKey>>)
    )
```
Where:
- `package_storage_id` – The object ID of the package (treated as the "parent").
- `<0x2::derived_object::DerivedObjectKey<0x2::package_metadata::PackageMetadataKey>>` – The full type tag wrapping the key type, i.e., `<0x2::package_metadata::PackageMetadataKey>` (which can be arbitrary for the derived object mechanism), in the `<0x2::derived_object::DerivedObjectKey<T>>` type (which is hardcoded in this mechanism).
- `{/* dummy bool */}` – The key bytes, which in the case of `PackageMetadataKey` contain only a dummy bool field.
- `bcs()` – The BCS serialization of a key type.
- `Blake2b256()` – The hashing function used for the ID derivation.
- `HashingIntentScope::ChildObjectId` – The flag used to avoid hash collisions. Hardcoded value of 240 (0xF0).
- `||` – Concatenation.

This derivation ensures that given any package ID, the corresponding metadata ID can always be computed without an on-chain lookup.
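The derivation above can be sketched in Python using `hashlib.blake2b`. Two assumptions are made for illustration (so the resulting ids are illustrative, not the real on-chain values): the length prefix is encoded as a little-endian u64, and the type tag is passed in as pre-serialized bytes rather than a real BCS-encoded Move type tag.

```python
# Illustrative sketch of the derived-object id computation.
import hashlib

HASHING_INTENT_CHILD_OBJECT_ID = b"\xf0"  # HashingIntentScope::ChildObjectId (240)

def derive_object_id(parent_id: bytes, key_bytes: bytes, key_type_tag: bytes) -> bytes:
    preimage = (
        HASHING_INTENT_CHILD_OBJECT_ID
        + parent_id
        + len(key_bytes).to_bytes(8, "little")  # assumed length-prefix encoding
        + key_bytes
        + key_type_tag
    )
    return hashlib.blake2b(preimage, digest_size=32).digest()

package_storage_id = bytes(32)  # placeholder 32-byte package id
key_bytes = b"\x00"             # PackageMetadataKey: a single dummy bool
type_tag = b"0x2::derived_object::DerivedObjectKey<0x2::package_metadata::PackageMetadataKey>"

package_metadata_id = derive_object_id(package_storage_id, key_bytes, type_tag)
assert len(package_metadata_id) == 32
# Deterministic: recomputing from the same package id yields the same metadata id.
assert package_metadata_id == derive_object_id(package_storage_id, key_bytes, type_tag)
```

This is exactly the property the text describes: anyone holding a package id can compute the metadata id locally, with no on-chain lookup.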
Runtime phase
After a successful publish/upgrade, any Move code can read the `PackageMetadata` object by borrowing it as an immutable reference. For example, when the Account Abstraction framework needs to create an `AuthenticatorFunctionRefV1` for an account, it reads the relevant `PackageMetadataV1` object, looks up the authenticator by module and function name, and extracts the `account_type`. This type information is trustworthy because it was extracted from verified bytecode by the protocol, i.e., not provided by user input or claimed by the package developer.

```move
public fun create_auth_function_ref_v1<Account: key>(
    package_metadata: &PackageMetadataV1,
    module_name: ascii::String,
    function_name: ascii::String,
): AuthenticatorFunctionRefV1<Account> {
    // TRUST: metadata was created by protocol, not user
    let authenticator_metadata = package_metadata
        .modules_metadata_v1(&module_name)
        .authenticator_metadata_v1(&function_name);

    // TRUST: account_type was extracted from VERIFIED bytecode
    assert!(
        type_name::get<Account>() == authenticator_metadata.account_type(),
        EAuthenticatorFunctionRefV1NotCompatibleWithAccount,
    );

    AuthenticatorFunctionRefV1 {
        package: package_metadata.storage_id(),
        module_name,
        function_name,
    }
}
```
Move Types and Methods Specification
Main Types:
```move
/// Key type for deriving the package metadata object address
public struct PackageMetadataKey has copy, drop, store {}

/// Represents the metadata of a Move package. This includes information
/// such as the storage ID, runtime ID, version, and metadata for the
/// functions contained within the package.
public struct PackageMetadataV1 has key {
    id: UID,
    /// Storage ID of the package represented by this metadata.
    /// The object id of the runtime package metadata object is derived from
    /// this value.
    storage_id: ID,
    /// Runtime ID of the package represented by this metadata. Runtime ID is
    /// the Storage ID of the first version of a package.
    runtime_id: ID,
    /// Version of the package represented by this metadata
    package_version: u64,
    // Handles to internal package modules
    modules_metadata: VecMap<ascii::String, ModuleMetadataV1>,
}

/// Represents metadata associated with a module in the package.
/// V1 includes only the authenticator functions information.
public struct ModuleMetadataV1 has copy, drop, store {
    authenticator_metadata: vector<AuthenticatorMetadataV1>,
}

/// Represents metadata for an authenticator within the package.
/// It includes the name of the authenticate function and the TypeName
/// of the first parameter (i.e., the account object type).
public struct AuthenticatorMetadataV1 has copy, drop, store {
    function_name: ascii::String,
    account_type: TypeName,
}
```
Key Accessor Functions:
```move
/// Return the version of the package represented by this metadata
public fun package_version(metadata: &PackageMetadataV1): u64 { }

/// Safely get the metadata of the specified module of the package
/// represented by this metadata
public fun try_get_modules_metadata_v1(
    self: &PackageMetadataV1,
    module_name: &ascii::String,
): Option<ModuleMetadataV1> { }

/// Borrow the metadata of the specified module of the package represented
/// by this metadata.
/// Aborts if the module is not found.
public fun modules_metadata_v1(
    self: &PackageMetadataV1,
    module_name: &ascii::String,
): &ModuleMetadataV1 { }

/// Safely get the `AuthenticatorMetadataV1` associated with the specified
/// `function_name` within the module metadata.
public fun try_get_authenticator_metadata_v1(
    self: &ModuleMetadataV1,
    function_name: &ascii::String,
): Option<AuthenticatorMetadataV1> { }

/// Borrow the `AuthenticatorMetadataV1` associated with the specified
/// `function_name`.
/// Aborts if the authenticator metadata is not found for that function.
public fun authenticator_metadata_v1(
    self: &ModuleMetadataV1,
    function_name: &ascii::String,
): &AuthenticatorMetadataV1 { }

/// Return the account type of the authenticator represented by this metadata
public fun account_type(self: &AuthenticatorMetadataV1): TypeName { }
```

Rationale
When the protocol creates metadata, it does so by extracting information from verified bytecode, not from user claims or developer assertions. This means Move code reading `PackageMetadata` can trust its contents implicitly: if the metadata says a function is a valid authenticator with a specific account type, that fact was verified by the protocol at publish time.
`PackageMetadata` is frozen immediately upon creation because the information it represents is itself immutable. Once a package is published, its bytecode cannot change, so metadata derived from that bytecode should not change either. If a package is upgraded, a new `PackageMetadata` object dedicated to the new version is created. Moreover, `PackageMetadata` objects are only created when a package contains at least one recognized attribute.

Computing metadata IDs deterministically from package IDs means that any code, on-chain Move or off-chain tooling, can calculate a package metadata ID without performing a lookup. This eliminates the need to store the mapping explicitly and ensures the relationship between package and metadata is inherent rather than recorded.
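The `try_get_*` accessors complement the aborting variants by letting callers probe a package for an authenticator without risking an abort. A minimal sketch of this usage style, assuming hypothetical `example::metadata_probe` scaffolding and an `iota::package_metadata` module path (the accessor names follow the specification above):

```move
// Illustrative only: the module paths are assumptions, not part of this proposal.
module example::metadata_probe {
    use std::ascii;
    use iota::package_metadata::{PackageMetadataV1, AuthenticatorMetadataV1};

    /// Return the authenticator metadata for `function_name` in `module_name`,
    /// or `none` if the module or function is not present, without aborting.
    public fun probe_authenticator(
        metadata: &PackageMetadataV1,
        module_name: ascii::String,
        function_name: ascii::String,
    ): Option<AuthenticatorMetadataV1> {
        let module_opt = metadata.try_get_modules_metadata_v1(&module_name);
        if (module_opt.is_none()) {
            return option::none()
        };
        let module_metadata = module_opt.destroy_some();
        module_metadata.try_get_authenticator_metadata_v1(&function_name)
    }
}
```

A caller such as an account framework could use this to offer a graceful fallback when a package exposes no authenticator, while reserving the aborting accessors for paths where absence is a programming error.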
Backwards Compatibility
- Existing Packages: For packages published before the `PackageMetadata` introduction, no metadata object exists and these packages continue to function normally.
- Package Upgrades: Each upgrade creates a new `PackageMetadata` object for that version, but old metadata objects always remain valid and accessible. `package_version` distinguishes between versions and `runtime_id` links all versions to the original package.
- Adding New Attributes and Fields to the Model: New attributes can be added without breaking existing code by adding a variant to the `IotaAttribute` enum or a field to `ModuleMetadata` and increasing the `PackageMetadata` version, i.e., `PackageMetadataV2`, `V3`, etc. Existing metadata continues to work.

Test Cases
- Abstracted IOTA Accounts Authenticator Functions
- Move View Functions
Reference Implementation
Main PR against the develop branch: https://github.com/iotaledger/iota/pull/9586. See IIP-0009.
Questions and Open Issues
- Metadata for Non-Attributed Packages: Should minimal metadata be created for all packages (e.g., just IDs and version)?
- Cross-Package Queries: Should Move code be able to query metadata for arbitrary packages, or only those passed as arguments?
- Metadata Expiration: Should old package version metadata eventually be prunable?
Future Work
Planned Attributes
| Attribute | Purpose | Metadata Fields |
|---|---|---|
| `#[authenticator]` | Account authentication | function_name, account_type |
| `#[view]` | Read-only functions | function_name, return_type |

Tooling
- CLI: `iota package metadata <package-id>`
- GraphQL: Package metadata queries
- Explorer: Metadata visualization
Copyright
Copyright and related rights waived via CC0.