Clack is Joint Field Chief Editor for the research journal Frontiers in Blockchain. Recent publications from Frontiers in Blockchain are given below:


  • Research Profile

    On 24th October 2009 Christopher D. Clack was awarded the Doctor of Science (ScD) degree from the University of Cambridge - the university’s highest degree, awarded for distinction in the advancement of science, and conferred on scientists "with a proven record of internationally recognised scholarship, including substantial and sustained contributions to scientific knowledge".

    This award recognises Clack's contribution to Computer Science in a research career at UCL that started in 1984 with the development of the world's first parallel graph reduction computer system made from stock hardware. Since then, Clack's research has covered four areas:

    • Functional Programming;
    • Genetic Computation;
    • Agent-Based Simulation; and
    • Financial Computing.

    Distinguishing features of Clack's research have been his ability to engage with scientists of other disciplines, to explore how research developments in Computer Science can benefit science more broadly, and to validate the impact of his research via engagement with industry.

    In 2009 Clack secured funding from industry, the Economic and Social Research Council, the Natural Environment Research Council, and the Technology Strategy Board to launch a national Knowledge Transfer Network for financial services.

  • The InterDyne Simulator (2011-)

    The InterDyne Simulator is designed to support exploration of interaction dynamics and feedback effects in non-smooth complex systems. It is a general-purpose tool that can be applied to a wide variety of systems, though at UCL its primary use has been to simulate interaction dynamics in the financial markets.

    • InterDyne is a discrete-time simulator, and simulations proceed for a stated number of time steps. The mapping of a simulator time step to a period of real time is a semantic issue for the designer of the experiment being run on InterDyne.

    • InterDyne is an intrinsically deterministic simulator - a simulation will always behave the same way every time it is run (unless the programmer expressly includes nondeterminism). This determinism greatly assists the understanding of the low-level interactions that cause complex behaviour, i.e. it facilitates determination of the causal pathway of a particular behaviour. Despite being intrinsically deterministic, InterDyne permits two types of non-determinism to be expressed: (i) the programmer may include non-deterministic (or pseudo-non-deterministic) elements in the code for a component; and (ii) where a simulation component receives many data items in one time step from many other components, InterDyne may be instructed to provide those data items either sorted according to the sender's identity or sorted pseudo-randomly - however, the pseudo-random behaviour will be identical each time the simulation is run. If it is desired to run a simulation multiple times, each time with a different behaviour, then the pseudo-random ordering can be given a different seed on each run (a minimal sketch of this seeded ordering follows this list).
    • InterDyne interaction is effected via communication between components; InterDyne supports both one-to-one communication and one-to-many communication.
    • InterDyne supports the precise definition of communication topology between components, to determine which interactions (communications) are permitted and which are not. This facilitates the design and implementation of simulations; an InterDyne simulation is a directed graph where the nodes are components (such as a trader, a news feed, or an exchange) and the edges are communication links.
    • InterDyne supports the specification of a separate information delay for each possible interaction path defined in the communication topology; these delays are applied to both one-to-one communications and one-to-many communications.
    • InterDyne permits components to be modelled at differing levels of detail. For example, one component may represent a trading algorithm modelled in great detail including internal data structures, interacting with another component that is modelled as a simple function generating orders according to a required statistical distribution.
    • InterDyne simulations are programmed using a functional language - the most recent version uses Haskell. This facilitates rapid development of simulations, and permits such simulations to be viewed as executable specifications.
    • InterDyne simulations may be interpreted either as executable specifications or as agent-based simulations.
    • The primary output from an InterDyne simulation is a trace file, suitable for further analysis.
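
    To make the seeded pseudo-random ordering concrete, the following is a minimal Haskell sketch. It is not InterDyne source code: the function orderInbound and the message representation are hypothetical, and only the general technique (a fixed seed yielding an identical ordering on every run) is illustrated.

    import System.Random (mkStdGen, randoms)
    import Data.List (sortBy)
    import Data.Ord (comparing)

    -- Hypothetical sketch: order the data items a component receives in one
    -- time step using a key stream drawn from a seeded generator. The same
    -- seed always produces the same permutation; a different seed produces a
    -- different (but again reproducible) ordering.
    orderInbound :: Int -> [(Int, msg)] -> [(Int, msg)]
    orderInbound seed items = map snd (sortBy (comparing fst) (zip keys items))
      where
        keys = randoms (mkStdGen seed) :: [Int]

    Re-running a simulation with the same seed reproduces the deterministic behaviour described above; supplying a different seed on each run gives the varied-but-repeatable behaviour also described.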

    Development of the InterDyne simulator

    Development of the InterDyne Simulator began in 2011, initially implemented using the functional language Miranda and then ported to Haskell. Several UCL students and researchers have been involved either in the development of InterDyne, or in using InterDyne to run experiments in Interaction Dynamics.

    InterDyne documents

    A draft InterDyne User Manual is available; this provides an introduction to the basic features of InterDyne. Other working papers and project dissertations related to InterDyne are also available.

  • InterDyne Simulator

    Development of the InterDyne Simulator began in 2011, initially implemented using the functional language Miranda (link)(link to book).

    To improve performance and to take advantage of improved profiling tools, InterDyne has since been ported to the functional language Haskell (link). The Glasgow Haskell Compiler (GHC) generates optimised and efficient native code.

    Several UCL students and researchers have been involved either in the development of InterDyne, or in using InterDyne to run experiments in Interaction Dynamics. These have included (with apologies for any omissions):

    • Elias Court, who made significant contributions to the Miranda version of InterDyne, and to our understanding of interaction dynamics between HFT market makers.
    • Richard Everett, who helped explore mechanisms for state-space analysis
    • Kyle Liu, who undertook the initial port of InterDyne from Miranda to Haskell
    • Dmitrijs Zaparanuks, who helped resolve initial problems with the Haskell port, and contributed greatly to the analysis and understanding of interaction dynamics between HFT market makers
    • Justin Moser, who undertook initial experiments to explore the HFT "front-running" claims made by Michael Lewis
    • Aman Chopra, who conducted a more detailed exploration of Lewis's HFT "front-running" claims and also helped to resolve some subtle problems with the Haskell port
    • Vikram Bakshi, who substantially improved InterDyne's internal infrastructure and contributed new agents and a FIX messaging engine (link)
    • Saagar Hemrajani, who has explored the impact of Reg NMS on Lewis's HFT "front-running" claims (link)
    • Florian Obst, who has developed a visualisation tool for InterDyne trace files, to assist in detecting emergent behaviour

    Prior to the development of InterDyne, other approaches to simulating Interaction Dynamics were explored. InterDyne occupies a "Goldilocks position" that neither models at a level of abstraction that is too high (e.g. probabilistic modelling) nor at a level that is too low (e.g. process modelling). Several student projects contributed to previous approaches, and these have included:

  • InterDyne Documents

    A draft InterDyne User Manual is available; this provides an introduction to the basic features of InterDyne:

    The following working paper illustrates how InterDyne's dynamic interaction model can be used to gain in-depth understanding of financial market instability:

    The InterDyne Simulator is not yet properly documented. However, several student projects at UCL have been based on either the development of the underlying technology or the use of the simulator, or both. Some related project dissertations are provided below:

    Vikram Bakshi implemented a fairly comprehensive FIX engine (so that components in a simulation of financial markets can interact via messages that contain information in a way that corresponds fairly closely to reality) and also implemented a component that simulates a subset of the behaviour of the BATS BYX exchange. Note, however, that although an exchange may include a FIX tag within one of its defined input or output messages, it might only accept a subset of the qualifiers defined by FIX. Vikram therefore designed a mechanism by which exchange-specific messages can be defined and can be tested for correctness by the compiler (using the type system). The mechanism for interaction with the BATS BYX exchange component is explained briefly in the following document: (here).
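
    As an illustration of the general idea only (not the actual design described above), the following hedged Haskell sketch shows how a type can restrict an exchange-specific message to the subset of qualifiers that the exchange accepts, so that an unsupported qualifier is rejected at compile time; all of the names below are hypothetical.

    -- Hypothetical subset of FIX order types accepted by a particular exchange
    -- component. A qualifier the exchange does not accept has no constructor
    -- here, so it cannot appear in a message at all.
    data ByxOrdType = ByxLimit | ByxMarket
      deriving Show

    -- Hypothetical exchange-specific new-order message built from that subset.
    data ByxNewOrder = ByxNewOrder
      { byxQty     :: Int
      , byxOrdType :: ByxOrdType
      } deriving Show

    In this style, ByxNewOrder 100 ByxLimit compiles, whereas an order type outside the accepted subset cannot even be expressed, which is the sense in which the compiler tests the message for correctness.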

  • InterDyne Modelling

    The InterDyne Simulator - Modelling

    An InterDyne model can be viewed either as an analytically-tractable executable specification or as an agent-based model. Taking the latter approach, the model consists of a number of agents communicating via the passing of messages. By defining in advance which agents may communicate with which other agents, a graph structure is created where the nodes are agents and the edges are permissible one-to-one communication paths. Agents may also send a one-to-many message to a broadcast channel; each agent declares in advance the broadcast channels to which it will listen.

    InterDyne is run by executing the function "sim" applied to appropriate arguments. The function sim has type sim :: Int -> [Arg_t] -> [(Agent_t, [Int])] -> IO() where the first argument is the number of time steps for the simulation, the second argument is a list of (key, value) pairs ("runtime arguments") that are made available to every agent in the system, and the third argument is a list of information about each agent (namely, a two-tuple containing the agent function of type Agent_t and a list of broadcast channel IDs to which the agent will listen). Output is sent to a file. Each agent is uniquely identified by its position in the list of agent information - the first agent has ID=1, the second has ID=2 and so on. ID=0 is reserved for the "simulator harness" function that mediates messaging and controls the passage of time during simulation. The agent identifiers are used to specify the source and destination of all one-to-one messages.

    Agents are functions that consume a potentially-infinite list of inbound messages and generate a potentially-infinite list of outbound messages. At each time step an agent must consume one item from the inbound list and must generate one new item on the outbound list. Each item in these lists is itself a list, so that at each time step an agent may receive multiple inbound messages and may generate multiple outbound messages. If an agent does not have any messages to receive or send at a given time step then it will either receive or generate an empty list. Optionally an agent may distinguish between an output item that is empty by mistake and an output item that is empty by design - it does this by generating an output item that is a list containing only the distinguished empty message called a "Hiaton", as sketched below.
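
    This sketch uses a locally defined stand-in type, not InterDyne's actual message representation, to show the distinction.

    -- Stand-in message type (not InterDyne's definition): a message is either a
    -- payload or the distinguished empty message, a Hiaton.
    data Msg payload = Payload payload | Hiaton
      deriving Show

    -- An output item that is empty by mistake:  []
    -- An output item that is empty by design:   [Hiaton]
    deliberatelyQuiet :: [Msg String]
    deliberatelyQuiet = [Hiaton]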

  • InterDyne Modelling Example

    The InterDyne Simulator - Modelling Example

    In early experiments all messages were sent using agent identifiers. It is still possible to do this, but we have since found it useful to refer to agents by name rather than by ID number, and we now recommend including in the list of "runtime arguments" a function that, when executed, will convert an ID into a name or vice versa. This is included in the runtime arguments so that it can be used by any agent and so that the mapping of names to identifiers is in the control of the experimenter. The following example illustrates how the function sim can be called:

    import Simulations.RuntimeArgFunctions as RTAFuns

    exampleExperiment :: IO ()
    exampleExperiment
      = do sim 60 myargs (map snd myagents)
      where
        myargs   = [ convert ]
        myagents = [ ("Trader",   (traderWrapper,   [1])),
                     ("Broker",   (brokerWrapper,   [3])),
                     ("Exchange", (exchangeWrapper, [2,3]))
                   ]
        convert  = RTAFuns.generateAgentBimapArg myagents

    In the above example sim is instructed to run for 60 time steps; there is only one runtime argument called convert, and the third argument to sim is a list of agents and broadcast channels on which they will listen. The convert function is a partial application of the library function generateAgentBimapArg (available in InterDyne v0.25 and later) to myagents - the result will be a function that will convert a name to an ID and vice versa. The first agent subscribes to broadcast channel 1, the second subscribes to channel 3, and the third subscribes to channels 2 and 3. This example does not illustrate how to define an output file for the results, nor how to use names instead of integers for broadcast channels, nor how to specify the legal communications topology and the delays that should be applied to each communication link. It does however indicate the parsimonious style that can be achieved when using Haskell.

    Agents are typically (but not always) written in two parts: (i) a "wrapper" function that manages the consumption of inbound messages, the generation of outbound messages, and the update of local state, and (ii) a "logic" function that is called by the wrapper function and which calculates the messages to be sent. The "wrapper" function is the true agent function, and it must be of type Agent_t.

    Here is a simple agent wrapper that does nothing (at each time step it consumes an inbound item, and creates an empty outbound item). It does not call a logic function:

    f :: Agent_t
    f st args ((t, msgs, bcasts) : rest) myid = [] : (f st args rest myid)

    The agent function is recursively defined and loops once per time step. It takes four arguments: st is a local state variable (in this example it is never inspected and never changed); args is a copy of the runtime arguments (every agent is passed a copy of all the runtime arguments); the third argument is the list of inbound items, where each item is a 3-tuple containing (i) the current time (an integer), (ii) a list of all one-to-one messages sent to this agent by other agents, and (iii) a list of all broadcast messages available at this time step on all the broadcast channels to which this agent is subscribed; and the last argument, myid, is the ID of this agent (which is decided by the simulator and should never be changed).
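
    The wrapper above does not call a logic function. To illustrate the two-part wrapper/logic structure described earlier, here is a hedged sketch; the types are simplified stand-ins rather than InterDyne's actual Agent_t, and the names wrapper and logic are hypothetical.

    -- Simplified stand-in (not InterDyne's real types). The wrapper consumes
    -- one inbound item per time step, delegates the decision about what to
    -- send to a pure logic function, threads the updated local state, and
    -- recurses for the next time step.
    wrapper :: st -> args -> [(Int, [msg], [bcast])] -> Int -> [[msg]]
    wrapper st args ((t, msgs, bcasts) : rest) myid =
        out : wrapper st' args rest myid
      where
        (out, st') = logic st t msgs bcasts

    -- Example logic function: echo every one-to-one message back unchanged and
    -- leave the local state untouched.
    logic :: st -> Int -> [msg] -> [bcast] -> ([msg], st)
    logic st _t msgs _bcasts = (msgs, st)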

  • InterDyne Simulator Topology & Delays

    The InterDyne Simulator - Topology & Delays

    When considering dynamical interaction between system components, we may wish to (i) define in advance the topology of legal interactions, and (ii) define the interaction delay that should be applied to such legal interactions (noting that delays may be asymmetric - i.e. that the delay from component A to component B may not be the same as the delay from B to A). InterDyne does not require the topology and delays to be defined, but provides support for such definition (from version 0.24 onwards).

    To define topology and delays, two runtime arguments must be passed to the "sim" function (both arguments must be present): first, a function that takes two agent IDs (integers that uniquely specify the start point and end point of an interaction) and returns an integer delay in timesteps; and second, the maximum delay in the system. From InterDyne version 0.25 the delay function argument uses the data constructor DelayArg and the identifying string "DelayArg"; for backwards compatibility the maximum delay is a Double, but it should represent a whole number of time steps. The experimenter has complete freedom to define the delay function, and if the interaction specified by the two agent IDs is illegal then the delay function should raise an error. Here is a very simple example for three agents:

    import Simulations.RuntimeArgFunctions as RTAFuns

    exampleExperiment :: IO ()
    exampleExperiment
      = do sim 60 myargs (map snd myagents)
      where
        myargs   = [ convert,
                     (Arg (Str "maxDelay", maxDelay)),
                     (DelayArg (Str "DelayArg", delay)) ]
        myagents = [ ("Trader",   (traderWrapper,   [1])),
                     ("Broker",   (brokerWrapper,   [3])),
                     ("Exchange", (exchangeWrapper, [2,3]))
                   ]
        convert  = RTAFuns.generateAgentBimapArg myagents
        delay 1 2 = 1
        delay 1 x = error "illegal interaction"
        delay 2 x = 2
        delay 3 2 = 3
        delay 3 x = error "illegal interaction"
        maxDelay = fromIntegral 3

    InterDyne experimenters often find it convenient to drive the delay function from an adjacency matrix. After setting the delays as shown above, the experimenter need do no more; one-to-one messages are automatically delayed by the stated number of timesteps, and broadcast messages are split into separate messages for each recipient and delayed by the amount stated for each link.
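
    For example, a delay function driven by an adjacency matrix might look like the following hedged sketch (this is not InterDyne library code; the delay values mirror the three-agent example above, with 0 marking an interaction that is not permitted and self-links treated as illegal):

    -- Row i, column j holds the delay (in time steps) from agent i to agent j;
    -- 0 marks an interaction that is not permitted. Agent IDs start at 1.
    delayMatrix :: [[Int]]
    delayMatrix = [ [0, 1, 0]     -- from Trader   (ID 1)
                  , [2, 0, 2]     -- from Broker   (ID 2)
                  , [0, 3, 0] ]   -- from Exchange (ID 3)

    delay :: Int -> Int -> Int
    delay from to
      | d > 0     = d
      | otherwise = error "illegal interaction"
      where d = (delayMatrix !! (from - 1)) !! (to - 1)

    A delay function written this way can be passed to sim inside the DelayArg runtime argument exactly as in the example above.
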
Enhancing blockchain scalability with snake optimization algorithm: a novel approach

Scalability remains a critical challenge for blockchain technology, limiting its potential for widespread adoption in high-demand transactional systems. This paper proposes an innovative solution to this challenge by applying the Snake Optimization Algorithm (SOA) to a blockchain framework, aimed at enhancing transaction throughput and reducing latency. A thorough literature review contextualizes our work within the current state of blockchain scalability efforts. We introduce a methodology that integrates SOA into the transaction validation process of a blockchain network. The effectiveness of this approach is empirically evaluated by comparing transaction processing times before and after the implementation of SOA. The results show a substantial reduction in latency, with the optimized system achieving lower average transaction times across various transaction volumes. Notably, the latency for processing batches of 10 and 100 transactions decreased from 30.29 ms and 155.66 ms to 0.42 ms and 0.37 ms, respectively, post-optimization. These findings indicate that SOA is exceptionally efficient in batch transaction scenarios, presenting an inverse scalability behavior that defies typical system performance degradation with increased load. Our research contributes a significant advancement in blockchain scalability, with implications for the development of more efficient and adaptable blockchain systems suitable for high-throughput enterprise applications.

Data depth and core-based trend detection on blockchain transaction networks

Blockchains are significantly easing trade finance, with billions of dollars worth of assets being transacted daily. However, analyzing these networks remains challenging due to the sheer volume and complexity of the data. We introduce a method named InnerCore that detects market manipulators within blockchain-based networks and offers a sentiment indicator for these networks. This is achieved through data depth-based core decomposition and centered motif discovery, ensuring scalability. InnerCore is a computationally efficient, unsupervised approach suitable for analyzing large temporal graphs. We demonstrate its effectiveness by analyzing and detecting three recent real-world incidents from our datasets: the catastrophic collapse of LunaTerra, the Proof-of-Stake switch of Ethereum, and the temporary peg loss of USDC, while also verifying our results against external ground truth. Our experiments show that InnerCore can match the qualified analysis accurately without human involvement, automating blockchain analysis in a scalable manner, while being more effective and efficient than baselines and a state-of-the-art attributed change detection approach in dynamic graphs.

Enhanced scalability and privacy for blockchain data using Merklized transactions

Blockchain technology has evolved beyond the use case of electronic cash and is increasingly used to secure, store, and distribute data for many applications. Distributed ledgers such as Bitcoin have the ability to record data of any kind alongside the transfer of monetary value. This property can be used to provide a source of immutable, tamper-evident data for a wide variety of applications spanning from the supply chain to distributed social media. However, this paradigm also presents new challenges regarding the scalability of data storage protocols, such that the data can be efficiently accessed by a large number of users, in addition to maintaining privacy for data stored on the blockchain. Here, we present a new mechanism for constructing blockchain transactions using Merkle trees comprised of transaction fields. Our construction allows transaction data to be verified field-wise using Merkle proofs. We show how the technique can be implemented either at the system level or as a second-layer protocol that does not require changes to the underlying blockchain. This technique allows users to efficiently verify blockchain data by separately checking targeted individual data items stored in transactions. Furthermore, we outline how our protocol can afford users improved privacy in a blockchain context by enabling network-wide data redaction. This feature of our design can be used by blockchain nodes to facilitate easier compliance with regulations such as GDPR and the right to be forgotten.

Smart contract life-cycle management: an engineering framework for the generation of robust and verifiable smart contracts

The concept of smart contracts (SCs) is becoming more prevalent, and their application is gaining traction across many diverse scenarios. However, producing poorly constructed contracts carries significant risks, including the potential for substantial financial loss, a lack of trust in the technology, and the risk of exposure to cyber-attacks. Several tools exist to assist in developing SCs, but their limited functionality increases development complexity. Expert knowledge is required to ensure contract reliability, resilience, and scalability. To overcome these risks and challenges, tools and services based on modeling and formal techniques are required that offer a robust methodology for SC verification and life-cycle management. This study proposes an engineering framework for the generation of robust and verifiable smart contracts (GRV-SC) that covers the entire SC life-cycle from design to deployment stages. It adopts SC modeling and automated formal verification methodologies to detect security vulnerabilities and improve resilience, extensibility, and code optimization to mitigate risks associated with SC development. Initially, the framework includes the implementation of a formal approach, using colored Petri nets (CPNs), to model cross-platform Digital Asset Modeling Language (DAML) SCs. It also incorporates a specialized type safety dynamic verifier, which is designed to detect and address new vulnerabilities that can arise in DAML contracts, such as access control and insecure direct object reference (IDOR) vulnerabilities. The proposed GRV-SC framework provides a holistic approach to SC life-cycle management and aims to enhance the security, reliability, and adoption of SCs.

Decentralized autonomous organization design for the commons and the common good

The current internet economy is characterised by a historically unprecedented bundling of private sector power over infrastructures. This situation is harmful when it comes to overcoming problems where collective action is needed, such as governing digital commons. Organisations that run on collectively owned decentralised infrastructure are able to overcome this centralisation of power. These common decentralised autonomous organisations (DAOs) could help in fostering digitally enabled collective action. However, currently we have no clear view of how a DAO designed for commons governance would operate and be governed. By creating a conceptual prototype of a DAO governing a common, we provide a clear path for how common DAOs should mature and which tools are needed to create them. In this research, we created a governance framework for common DAOs by combining 16 works on technology for commons governance. The framework reveals that common DAO governance consists of three areas: 1) Governance structure, 2) Enabling technology, and 3) Community governance. We provide governance mechanisms that together describe an implementation of Ostrom's common governance principles in a DAO. This work is a synthesis of previous research on technology for collective action. The proposed framework aids in standardising DAO governance for the common good and may contribute to a large-scale roll-out of commons DAOs.

Decentralized token economy theory (DeTEcT): token pricing, stability and governance for token economies

This paper presents a pioneering approach for the simulation of economic activity, policy implementation, and pricing of goods in token economies. The paper proposes a formal analysis framework for wealth distribution analysis and simulation of interactions between economic participants in an economy. Using this framework, we define a mechanism for identifying prices that achieve the desired wealth distribution according to some metric, and stability of economic dynamics. The motivation to study tokenomics theory is the increasing use of tokenization, specifically in financial infrastructures, where the design of token economies is at the forefront. Tokenomics theory establishes a quantitative framework for wealth distribution amongst economic participants and implements an algorithmic regulatory controls mechanism that reacts to changes in economic conditions. In our framework, we introduce the concept of a tokenomic taxonomy where agents in the economy are categorized into agent types and interactions between them. This novel approach is motivated by having a generalized model of the macroeconomy with controls being implemented through interactions and policies. The existence of such controls allows us to measure and readjust the wealth dynamics in the economy to suit the desired objectives.

Provenance blockchain for ensuring IT security in cloud manufacturing

Provenance blockchain is an evolving concept for the protection of production, logistics, and supply chain networks from rogue Industrial Internet of Things (IIoT) devices. Such rogue IIoT devices are a recognized threat in cloud manufacturing networks. In extreme cases, they can be used to cause industrial accidents. In brief, provenance is the end-to-end tracking and tracing of data and of the nodes involved in creating, modifying, transmitting, storing, and deleting it at specific times and locations. It provides an end-to-end verifiable and controlled computation for ensuring the trustworthiness, quality, reliability, and validity of data. Provenance has long existed in computing in the form of logging software systems. This research is focused on threats to food supply chains between two countries. A scenario for protecting a food supply chain from India to the UAE has been modeled. This research recognized the threat of harmful food items getting mixed with the flow of genuine products in a supply chain. The IIoT devices used to control the flow can be authenticated using the evolving provenance blockchain technology. With the help of recent design recommendations in the literature, a model design has been created and simulated in this research. Observations from the simulation revealed that TCP congestion and unpredictable turnaround times for assigning cryptographic keys to IIoT device sessions may have to be explored in future work. A collaborative design between the two nations has been proposed. All IIoT devices not supporting cryptography will be eliminated from the cloud manufacturing and supply chain networks. Currently, this design may be used for one-time registration only. Future studies may provide improved versions in which repeated authentication and key replacement may be implemented.

Decentralized justice: state of the art, recurring criticisms and next-generation research topics

Decentralized justice is a novel approach to online dispute resolution based on blockchain, crowdsourcing and game theory for adjudicating claims in a neutral and efficient way. Since the launch of the first decentralized justice platform in 2018, the field has attracted wide interest both from practitioners and academics in Web3 and dispute resolution. The decentralized justice approach is based on the ideas of decentralization, economic incentives and a claim to fairness in its decisions. At the current stage of development, decentralized justice is facing a number of technical, market, legal and ethical challenges for further development. This paper provides a review of the short history of decentralized justice, addresses a number of recurrent topics and lays down a path for future exploration.

How Chinese fintech threatens US Dollar hegemony

This article argues that the development of Chinese financial technology, or ‘fintech’, over the past decade is primarily motivated by the need to safeguard Chinese monetary sovereignty, which is threatened by the proliferation of non-state cryptocurrencies, like Bitcoin, that have exacerbated the problem of capital flight, not only for China but also for other non-Western countries that have lost fortunes to outflows seeking access to Western financial assets. This raises the question: how is China responding to the emergence of cryptocurrencies as a development that reinforces US financial hegemony? The answer explored by this paper is that China is embracing elements of cryptocurrency technology in the form of digital payment systems and blockchain technology. These Chinese fintech developments pose a serious, unprecedented challenge to the financial hegemony of the US insofar as they compel other countries to copy the Chinese response, because these countries desire the tools to limit illegal outflows of capital that have historically propped up the US Dollar.

Bitcoin equilibrium dynamics: a long term approach

In the long run, Bitcoin transaction fees are the only source of revenue for miners. They compete broadly in two main ways: proof of work effort to win blocks; and transaction processing to gather fee rewards into the blocks they win. This paper contributes to existing literature by developing a dynamic model that separates these two functions, and explores implications for aggregate efficiency outcomes. Specifically, when set by free market forces (unrestricted by artificially imposed block size caps), what happens to overall transaction prices and quantities relative to total energy use? When is it worth Stackelberg-leading miners investing in efficiency-improving R&D? What effect does this have on overall efficiency over time? By explicitly separating specialised capital dedicated to SHA256 hashing (for proof of work) from transaction processing capital (for transaction collection and verification), this paper sheds light on these questions. One key conclusion is that miner innovation lowers energy use per transaction over time for elastic enough transaction demand schedules. The more competitors Bitcoin has (existing fiat and data services, and other new Blockchain-based systems), the stronger is this conclusion.