Clack is Joint Field Chief Editor for the research journal Frontiers in Blockchain. Recent publications from Frontiers in Blockchain are given below:


  • Research Profile

    On 24th October 2009 Christopher D. Clack was awarded the Doctor of Science (ScD) degree from the University of Cambridge - the university’s highest degree, awarded for distinction in the advancement of science, and conferred on scientists "with a proven record of internationally recognised scholarship, including substantial and sustained contributions to scientific knowledge".

    This award recognises Clack's contribution to Computer Science in a research career at UCL that started in 1984 with the development of the world's first parallel graph reduction computer system made from stock hardware. Since then, Clack's research has covered four areas:

    • Functional Programming;
    • Genetic Computation;
    • Agent-Based Simulation; and
    • Financial Computing.

    Distinguishing features of Clack's research have been his ability to engage with scientists of other disciplines, to explore how research developments in Computer Science can benefit science more broadly, and to validate the impact of his research via engagement with industry.

    In 2009 Clack secured funding from industry, the Economic and Social Research Council, the Natural Environment Research Council, and the Technology Strategy Board to launch a national Knowledge Transfer Network for financial services.

  • The InterDyne Simulator (2011-)

    The InterDyne Simulator is designed to support exploration of interaction dynamics and feedback effects in non-smooth complex systems. It is a general-purpose tool that can be applied to a wide variety of systems, though at UCL its primary use has been to simulate interaction dynamics in the financial markets.

    • InterDyne is a discrete-time simulator, and simulations proceed for a stated number of time steps. The mapping of a simulator time step to a period of real time is a semantic issue for the designer of the experiment being run on InterDyne.

    • InterDyne is an intrinsically deterministic simulator - a simulation will always behave the same way every time it is run (unless the programmer expressly includes nondeterminism). This determinism greatly assists the understanding of the low-level interactions that cause complex behaviour, i.e. it facilitates determination of the causal pathway of a particular behaviour. Despite being intrinsically deterministic, InterDyne permits two types of non-determinism to be expressed - (i) the programmer may include non-deterministic (or pseudo-non-deterministic) elements in the code for a component; and (ii) where a simulation component receives many data items in one time step from many other components, InterDyne may be instructed to provide those data items either sorted according to the sender's identity or sorted pseudo-randomly - however, the pseudo-random behaviour will be identical each time the simulation is run. If it is desired to run a simulation multiple times, each time with a different behaviour, then the pseudo-random behaviour can be provided with a different seed on each run (a short illustrative sketch follows this list).
    • InterDyne interaction is effected via communication between components; InterDyne supports both one-to-one communication and one-to-many communication.
    • InterDyne supports the precise definition of communication topology between components, to determine which interactions (communications) are permitted and which are not. This facilitates the design and implementation of simulations; an InterDyne simulation is a directed graph where the nodes are components (such as a trader, a news feed, or an exchange) and the edges are communication links.
    • InterDyne supports the specification of a separate information delay for each possible interaction path defined in the communication topology; these delays are applied to both one-to-one communications and one-to-many communications.
    • InterDyne permits components to be modelled at differing levels of detail. For example, one component may represent a trading algorithm modelled in great detail including internal data structures, interacting with another component that is modelled as a simple function generating orders according to a required statistical distribution.
    • InterDyne simulations are programmed using a functional language - the most recent version uses Haskell. This facilitates rapid development of simulations, and permits such simulations to be viewed as executable specifications.
    • InterDyne simulations may be interpreted either as executable specifications or as agent-based simulations.
    • The primary output from an InterDyne simulation is a trace file, suitable for further analysis.
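
    The seeded pseudo-random ordering mentioned above can be illustrated with a minimal sketch. This is not InterDyne library code: the function name shuffleBySeed and the use of System.Random (from the random package) are assumptions for illustration only. The point is that a fixed seed always yields the same ordering, so runs remain reproducible, while a different seed yields a different but equally repeatable ordering.

    import System.Random (mkStdGen, randomR)

    -- Reproducible "shuffle": the same seed always gives the same ordering.
    shuffleBySeed :: Int -> [a] -> [a]
    shuffleBySeed seed = go (mkStdGen seed)
      where
        go _ [] = []
        go g xs = let (i, g') = randomR (0, length xs - 1) g
                  in  (xs !! i) : go g' (take i xs ++ drop (i + 1) xs)

    -- Example: shuffleBySeed 42 ["from agent 1", "from agent 2", "from agent 3"]
    -- produces the same ordering every time it is evaluated with seed 42;
    -- a different seed gives a different (but equally repeatable) ordering.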

    Development of the InterDyne simulator

    Development of the InterDyne Simulator began in 2011, initially implemented using the functional language Miranda and then ported to Haskell. Several UCL students and researchers have been involved either in the development of InterDyne, or in using InterDyne to run experiments in Interaction Dynamics.

    InterDyne documents

    A draft InterDyne User Manual is available; this provides an introduction to the basic features of InterDyne. Other working papers and project dissertations related to InterDyne are also available.

  • InterDyne Simulator

    Development of the InterDyne Simulator began in 2011, initially implemented using the functional language Miranda.

    To improve performance and to take advantage of improved profiling tools, InterDyne has since been ported to the functional language Haskell. The Glasgow Haskell Compiler (GHC) generates optimised and efficient native code.

    Several UCL students and researchers have been involved either in the development of InterDyne, or in using InterDyne to run experiments in Interaction Dynamics. These have included (with apologies for any omissions):

    • Elias Court, who made significant contributions to the Miranda version of InterDyne, and to our understanding of interaction dynamics between HFT market makers.
    • Richard Everett, who helped explore mechanisms for state-space analysis
    • Kyle Liu, who undertook the initial port of InterDyne from Miranda to Haskell
    • Dmitrijs Zaparanuks, who helped resolve initial problems with the Haskell port, and contributed greatly to the analysis and understanding of interaction dynamics between HFT market makers
    • Justin Moser, who undertook initial experiments to explore the HFT "front-running" claims made by Michael Lewis
    • Aman Chopra, who conducted a more detailed exploration of Lewis's HFT "front-running" claims and also helped to resolve some subtle problems with the Haskell port
    • Vikram Bakshi, who substantially improved InterDyne's internal infrastructure and contributed new agents and a FIX messaging engine
    • Saagar Hemrajani, who has explored the impact of Reg NMS on Lewis's HFT "front-running" claims
    • Florian Obst, who has developed a visualisation tool for InterDyne trace files, to assist in detecting emergent behaviour

    Prior to the development of InterDyne, other approaches to simulating Interaction Dynamics were explored. InterDyne occupies a "Goldilocks position" that neither models at a level of abstraction that is too high (e.g. probabilistic modelling) nor at a level that is too low (e.g. process modelling). Several student projects contributed to these previous approaches.

  • InterDyne Documents

    A draft InterDyne User Manual is available; this provides an introduction to the basic features of InterDyne.

    A working paper is also available that illustrates how InterDyne's dynamic interaction model can be used to gain an in-depth understanding of financial market instability.

    The InterDyne Simulator is not yet properly documented. However, several student projects at UCL have been based on either the development of the underlying technology or the use of the simulator, or both. Some related project dissertations are provided below:

    Vikram Bakshi implemented a fairly comprehensive FIX engine (so that components in a simulation of financial markets can interact via messages that contain information in a way that corresponds fairly closely to reality) and also implemented a component that simulates a subset of the behaviour of the BATS BYX exchange. Note, however, that although an exchange may include a FIX tag within one of its defined input or output messages, it might only accept a subset of the qualifiers defined by FIX. Vikram therefore designed a mechanism by which exchange-specific messages can be defined and tested for correctness by the compiler (using the type system). The mechanism for interaction with the BATS BYX exchange component is explained briefly in a separate document.
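
    The following is a hedged sketch of that idea (not Vikram Bakshi's actual code; the type and field names are assumptions): the exchange component is given its own message type that encodes only the subset of FIX qualifiers it accepts, so a message carrying an unsupported qualifier fails to type-check.

    -- FIX tag 40 (OrdType) admits many values, but suppose the modelled
    -- exchange only accepts Market and Limit orders.  Encoding just that
    -- subset as a dedicated type makes illegal messages unrepresentable.
    data BYXOrdType = Market | Limit
      deriving (Show, Eq)

    data BYXNewOrder = BYXNewOrder
      { ordType  :: BYXOrdType       -- anything outside the subset is a type error
      , quantity :: Int
      , price    :: Maybe Double     -- Nothing for Market orders
      } deriving Show

    -- A stop order (FIX OrdType "3") cannot even be constructed for this
    -- exchange component, so the compiler catches the mistake before any
    -- simulation is run.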

  • InterDyne Modelling

    The InterDyne Simulator - Modelling

    An InterDyne model can be viewed either as an analytically-tractable executable specification or as an agent-based model. Taking the latter approach, the model consists of a number of agents communicating via the passing of messages. By defining in advance which agents may communicate with which other agents, a graph structure is created where the nodes are agents and the edges are permissible one-to-one communication paths. Agents may also send a one-to-many message to a broadcast channel; each agent declares in advance the broadcast channels to which it will listen.

    InterDyne is run by executing the function "sim" applied to appropriate arguments. The function sim has type sim :: Int -> [Arg_t] -> [(Agent_t, [Int])] -> IO (), where the first argument is the number of time steps for the simulation, the second argument is a list of (key, value) pairs ("runtime arguments") that are made available to every agent in the system, and the third argument is a list of information about each agent (namely, a two-tuple containing the agent function of type Agent_t and a list of broadcast channel IDs to which the agent will listen). Output is sent to a file. Each agent is uniquely identified by its position in the list of agent information: the first agent has ID=1, the second has ID=2, and so on. ID=0 is reserved for the "simulator harness" function that mediates messaging and controls the passage of time during the simulation. The agent identifiers are used to specify the source and destination of all one-to-one messages.

    Agents are functions that consume a potentially-infinite list of inbound messages and generate a potentially-infinite list of outbound messages. At each time step an agent must consume one item from the inbound list and must generate one new item on the outbound list. Each item in these lists is itself a list, so that at each time step an agent may receive multiple inbound messages and may generate multiple outbound messages. If an agent does not have any messages to receive or send at a given time step then it will receive or generate an empty list. Optionally, an agent may distinguish between an output item that is an empty list by mistake and an output item that is empty by design - it does this by generating an output item that is a list containing only the distinguished empty message called a "Hiaton".
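
    As a hedged illustration of this list-of-lists structure and the Hiaton convention (the message type and constructor names below are assumptions for illustration, not InterDyne's actual definitions):

    -- One outbound item is produced per time step; each item is a list of
    -- messages, possibly empty.
    data Msg = Hiaton          -- distinguished "empty by design" marker
             | Msg String      -- an ordinary message payload
      deriving Show

    quietStep :: [Msg]         -- nothing to send this step, deliberately
    quietStep = [Hiaton]

    busyStep :: [Msg]          -- two messages sent in the same time step
    busyStep = [Msg "new order", Msg "cancel order"]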

  • InterDyne Modelling Example

    The InterDyne Simulator - Modelling Example

    In early experiments all messages were sent using agent identifiers. It is still possible to do this, but we have since found it useful to refer to agents by name rather than by ID number, and we now recommend including in the list of "runtime arguments" a function that, when executed, will convert an ID into a name or vice versa. This is included in the runtime arguments so that it can be used by any agent and so that the mapping of names to identifiers is under the control of the experimenter. The following example illustrates how the function sim can be called:

    import Simulations.RuntimeArgFunctions as RTAFuns

    exampleExperiment :: IO ()
    exampleExperiment = do
        sim 60 myargs (map snd myagents)
      where
        myargs   = [ convert ]
        myagents = [ ("Trader",   (traderWrapper,   [1])),
                     ("Broker",   (brokerWrapper,   [3])),
                     ("Exchange", (exchangeWrapper, [2,3]))
                   ]
        convert  = RTAFuns.generateAgentBimapArg myagents

    In the above example sim is instructed to run for 60 time steps; there is only one runtime argument, called convert, and the third argument to sim is a list of agents and the broadcast channels to which they will listen. The convert function is a partial application of the library function generateAgentBimapArg (available in InterDyne v0.25 and later) to myagents - the result is a function that will convert a name to an ID and vice versa. The first agent subscribes to broadcast channel 1, the second subscribes to channel 3, and the third subscribes to channels 2 and 3. This example does not illustrate how to define an output file for the results, nor how to use names instead of integers for broadcast channels, nor how to specify the legal communications topology and the delays that should be applied to each communication link. It does, however, indicate the parsimonious style that can be achieved when using Haskell.

    Agents are typically (but not always) written in two parts: (i) a "wrapper" function that manages the consumption of inbound messages, the generation of outbound messages, and the update of local state, and (ii) a "logic" function that is called by the wrapper function and which calculates the messages to be sent. The "wrapper" function is the true agent function, and it must be of type Agent_t. (A sketch of a wrapper that delegates to a logic function is given at the end of this section.)

    Here is a simple agent wrapper that does nothing (at each time step it consumes an inbound item, and creates an empty outbound item). It does not call a logic function:

    f :: Agent_t
    f st args ((t, msgs, bcasts) : rest) myid = [] : (f st args rest myid)

    The agent function is recursively defined and loops once per time step. It takes four arguments: st is a local state variable (in this example it is never inspected and never changed); args is a copy of the runtime arguments (every agent is passed a copy of all the runtime arguments); the third argument is the list of inbound items, where each item is a 3-tuple containing (i) the current time (an integer), (ii) a list of all one-to-one messages sent to this agent by other agents, and (iii) a list of all broadcast messages available at this time step on all the broadcast channels to which this agent is subscribed; and the last argument, myid, is the ID of this agent (which is decided by the simulator and should never be changed).
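
    Building on the do-nothing wrapper above, the following is a minimal sketch of the two-part style described earlier: a wrapper that consumes one inbound item per time step, threads a local state, and delegates to a separate logic function. The placeholder types and names here are assumptions standing in for InterDyne's real definitions, not library code.

    -- Placeholder types, for illustration only; InterDyne defines the real ones.
    type Msg_t   = String
    type InItem  = (Int, [Msg_t], [Msg_t])   -- (time, one-to-one msgs, broadcast msgs)
    type Agent_t = Int -> [(String, String)] -> [InItem] -> Int -> [[Msg_t]]

    -- Wrapper: consumes one inbound item per time step, updates its state,
    -- and asks the logic function what to send.
    counterWrapper :: Agent_t
    counterWrapper st args ((t, msgs, bcasts) : rest) myid =
        out : counterWrapper st' args rest myid
      where
        (st', out) = counterLogic st msgs
    counterWrapper _ _ [] _ = []

    -- Logic: counts the one-to-one messages received so far; in this sketch
    -- it never sends anything in reply.
    counterLogic :: Int -> [Msg_t] -> (Int, [Msg_t])
    counterLogic st msgs = (st + length msgs, [])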

  • InterDyne Simulator Topology & Delays

    The InterDyne Simulator - Topology & Delays

    When considering dynamical interaction between system components, we may wish to (i) define in advance the topology of legal interactions, and (ii) define the interaction delay that should be applied to such legal interactions (noting that delays may be asymmetric - i.e. that the delay from component A to component B may not be the same as the delay from B to A). InterDyne does not require the topology and delays to be defined, but provides support for such definition (from version 0.24 onwards).

    To define topology and delays, two runtime arguments must be passed to the "sim" function (both arguments must be present): first, a function that takes two agent IDs (integers that uniquely specify the start point and end point of an interaction) and returns an integer delay in time steps; and second, the maximum delay in the system. From InterDyne version 0.25 the delay function argument uses the data constructor DelayArg and the identifying string "DelayArg"; for backwards compatibility the maximum delay is a Double, but it should represent a whole number of time steps. The experimenter has complete freedom to define the delay function, and if the interaction specified by the two agent IDs is illegal then the delay function should raise an error. Here is a very simple example for three agents:

    import Simulations.RuntimeArgFunctions as RTAFuns

    exampleExperiment :: IO ()
    exampleExperiment = do
        sim 60 myargs (map snd myagents)
      where
        myargs   = [ convert,
                     (Arg (Str "maxDelay", maxDelay)),
                     (DelayArg (Str "DelayArg", delay)) ]
        myagents = [ ("Trader",   (traderWrapper,   [1])),
                     ("Broker",   (brokerWrapper,   [3])),
                     ("Exchange", (exchangeWrapper, [2,3]))
                   ]
        convert  = RTAFuns.generateAgentBimapArg myagents
        delay 1 2 = 1
        delay 1 x = error "illegal interaction"
        delay 2 x = 2
        delay 3 2 = 3
        delay 3 x = error "illegal interaction"
        maxDelay  = fromIntegral 3

    InterDyne experimenters often find it convenient to drive the delay function from an adjacency matrix (a sketch follows below). After setting the delays as shown above, the experimenter need do no more; one-to-one messages are automatically delayed by the stated number of time steps, and broadcast messages are split into separate messages for each recipient and delayed by the amount stated for each link.
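
    As a hedged sketch of the adjacency-matrix approach just mentioned (illustration only - the matrix layout and names are assumptions, not InterDyne library code), the three-agent delays from the example above can be derived from a matrix in which the row is the sender ID, the column is the receiver ID, and 0 marks an illegal interaction:

    adjacency :: [[Int]]
    adjacency = [ [0, 1, 0]       -- agent 1 may send only to agent 2 (delay 1)
                , [2, 2, 2]       -- agent 2 may send to any agent (delay 2)
                , [0, 3, 0] ]     -- agent 3 may send only to agent 2 (delay 3)

    delay :: Int -> Int -> Int
    delay from to
        | d > 0     = d
        | otherwise = error "illegal interaction"
      where
        d = (adjacency !! (from - 1)) !! (to - 1)

    maxDelay :: Double
    maxDelay = fromIntegral (maximum (map maximum adjacency))
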
Recent publications from Frontiers in Blockchain

Integrated cybersecurity for metaverse systems operating with artificial intelligence, blockchains, and cloud computing
In the ever-evolving realm of cybersecurity, the increasing integration of Metaverse systems with cutting-edge technologies such as Artificial Intelligence (AI), Blockchain, and Cloud Computing presents a host of new opportunities alongside significant challenges. This article employs a methodological approach that combines an extensive literature review with focused case study analyses to examine the changing cybersecurity landscape within these intersecting domains. The emphasis is particularly on the Metaverse, exploring its current state of cybersecurity, potential future developments, and the influential roles of AI, blockchain, and cloud technologies. Our thorough investigation assesses a range of cybersecurity standards and frameworks to determine their effectiveness in managing the risks associated with these emerging technologies. Special focus is directed towards the rapidly evolving digital economy of the Metaverse, investigating how AI and blockchain can enhance its cybersecurity infrastructure whilst acknowledging the complexities introduced by cloud computing. The results highlight significant gaps in existing standards and a clear necessity for regulatory advancements, particularly concerning blockchain’s capability for self-governance and the early-stage development of the Metaverse. The article underscores the need for proactive regulatory involvement, stressing the importance of cybersecurity experts and policymakers adapting and preparing for the swift advancement of these technologies. Ultimately, this study offers a comprehensive overview of the current scenario, foresees future challenges, and suggests strategic directions for integrated cybersecurity within Metaverse systems utilising AI, blockchain, and cloud computing.
Blockchain in the courtroom: exploring its evidentiary significance and procedural implications in U.S. judicial processes
This paper explores the evidentiary significance of blockchain records and the procedural implications of integrating this technology into the U.S. judicial system, as several states have undertaken legislative measures to facilitate the admissibility of blockchain evidence. We employ a comprehensive methodological approach, including legislative analysis, comparative case law analysis, technical examination of blockchain mechanics, and stakeholder engagement. Our study suggests that blockchain evidence may be categorized as hearsay exceptions or non-hearsay, depending on the specific characteristics of the records. The paper proposes a specialized consensus mechanism for standardizing blockchain evidence authentication and outlines strategies to enhance the technology’s trustworthiness. It also highlights the importance of expert testimony in clarifying blockchain’s technical aspects for legal contexts. This study contributes to understanding blockchain’s integration into judicial systems, emphasizing the need for a comprehensive approach to its admissibility and reliability as evidence. It bridges the gap between technology and law, offering a blueprint for standardizing legal approaches to blockchain and urging ethical and transparent technology use.
Private law framework for blockchain
Current attempts to regulate blockchain technology are mainly based on a securities law framework, which considers crypto tokens and digital assets as either securities, currencies or derivatives thereof. The main limitation of such an approach lies in its inability to accommodate the diverse legal rights, obligations and assets that blockchain technology can virtually reproduce. Already in 2017–2018 there were attempts to tokenize rights outside of the securities law framework; these initiatives served more as makeshift solutions to circumvent securities regulations than as thorough frameworks for managing real-world assets and commercial activities. This article conducts a comparative and historical analysis of blockchain regulatory initiatives in Europe and the US, positing that the regulation of blockchain technology through a securities law lens is driven by reactionary opportunism. Such a basis is deemed inappropriate and insufficient, as securities laws, being a field of public law, were not designed to govern real-world assets and commerce, which fundamentally rely on the principles of laissez-faire and freedom of contract inherent in private law. A regulatory stance focused solely on public law overlooks the full potential of blockchain technology, and risks stifling innovation and practical applications. To illustrate this, the article presents a case study of the tokenization of contractual rights, demonstrating that securities law-focused legal regulations, such as the EU Regulation 2023/1114 on Markets in Crypto-Assets (MiCA) and Regulation 2022/858 on Distributed Ledger Technology (DLT), inadequately address the field of private commerce. Based on the analysis, the article concludes that a comprehensive legal framework for blockchain technology should combine public and private law regimes akin to the regulation of traditional rights, obligations and assets.
Universal basic income on blockchain: the case of circles UBI
The paper reviews Circles UBI as an illustrative case study of implementing the idea of universal basic income (UBI) on blockchain. Circles was born out of the Gnosis Chain as a more democratic alternative to Bitcoin coupled with the ambitious political project of algorithmically distributing UBI. Backed by the Gnosis Chain, Circles Coop was founded in 2020 to implement this idea in Berlin. Examining the failure of the Berlin pilot helps us draw substantial conclusions with regard to the implementation of UBI on blockchain. UBI alone, on blockchain or not, is not enough to solve the problems its proponents argue against. UBI would be helpful as a tool if plugged into a model of production embedded into a political strategy aiming to fix key problems of current societies such as gaping inequalities and climate change. We give a snapshot here of the model of open cooperativism as a counter-hegemonic political project vis-à-vis neoliberalism. Circles UBI could plug into the model of open cooperativism as a distribution and liquidity injection mechanism to foster the transition towards a commons-based ethical and sustainable post-capitalist economy.
Challenges of user data privacy in self-sovereign identity verifiable credentials for autonomous building access during the COVID-19 pandemic
Self-sovereign identity is an emerging blockchain technology field. Its use cases primarily surround identity and credential management and advocate the privacy of user details during the verification process. Our endeavor was to test and implement the features promoted for self-sovereign identity through open- and closed-source frameworks utilizing a scenario of building access management to adhere to health risk and safety questionnaires during the COVID-19 pandemic. Our investigation identifies whether user data privacy could be ensured through verifiable credentials and whether business practices would need to evolve to mitigate storing personal data centrally.
Enhancing blockchain scalability with snake optimization algorithm: a novel approach
Scalability remains a critical challenge for blockchain technology, limiting its potential for widespread adoption in high-demand transactional systems. This paper proposes an innovative solution to this challenge by applying the Snake Optimization Algorithm (SOA) to a blockchain framework, aimed at enhancing transaction throughput and reducing latency. A thorough literature review contextualizes our work within the current state of blockchain scalability efforts. We introduce a methodology that integrates SOA into the transaction validation process of a blockchain network. The effectiveness of this approach is empirically evaluated by comparing transaction processing times before and after the implementation of SOA. The results show a substantial reduction in latency, with the optimized system achieving lower average transaction times across various transaction volumes. Notably, the latency for processing batches of 10 and 100 transactions decreased from 30.29 ms and 155.66 ms to 0.42 ms and 0.37 ms, respectively, post optimization. These findings indicate that SOA is exceptionally efficient in batch transaction scenarios, presenting an inverse scalability behavior that defies typical system performance degradation with increased load. Our research contributes a significant advancement in blockchain scalability, with implications for the development of more efficient and adaptable blockchain systems suitable for high throughput enterprise applications.
Data depth and core-based trend detection on blockchain transaction networks
Blockchains are significantly easing trade finance, with billions of dollars worth of assets being transacted daily. However, analyzing these networks remains challenging due to the sheer volume and complexity of the data. We introduce a method named InnerCore that detects market manipulators within blockchain-based networks and offers a sentiment indicator for these networks. This is achieved through data depth-based core decomposition and centered motif discovery, ensuring scalability. InnerCore is a computationally efficient, unsupervised approach suitable for analyzing large temporal graphs. We demonstrate its effectiveness by analyzing and detecting three recent real-world incidents from our datasets: the catastrophic collapse of LunaTerra, the Proof-of-Stake switch of Ethereum, and the temporary peg loss of USDC, while also verifying our results against external ground truth. Our experiments show that InnerCore can match the qualified analysis accurately without human involvement, automating blockchain analysis in a scalable manner, while being more effective and efficient than baselines and a state-of-the-art attributed change detection approach in dynamic graphs.
Enhanced scalability and privacy for blockchain data using Merklized transactions
Blockchain technology has evolved beyond the use case of electronic cash and is increasingly used to secure, store, and distribute data for many applications. Distributed ledgers such as Bitcoin have the ability to record data of any kind alongside the transfer of monetary value. This property can be used to provide a source of immutable, tamper-evident data for a wide variety of applications spanning from the supply chain to distributed social media. However, this paradigm also presents new challenges regarding the scalability of data storage protocols, such that the data can be efficiently accessed by a large number of users, in addition to maintaining privacy for data stored on the blockchain. Here, we present a new mechanism for constructing blockchain transactions using Merkle trees comprised of transaction fields. Our construction allows for transaction data to be verified field-wise using Merkle proofs. We show how the technique can be implemented either at the system level or as a second layer protocol that does not require changes to the underlying blockchain. This technique allows users to efficiently verify blockchain data by separately checking targeted individual data items stored in transactions. Furthermore, we outline how our protocol can afford users improved privacy in a blockchain context by enabling network-wide data redaction. This feature of our design can be used by blockchain nodes to facilitate easier compliance with regulations such as GDPR and the right to be forgotten.
Smart contract life-cycle management: an engineering framework for the generation of robust and verifiable smart contracts
The concept of smart contracts (SCs) is becoming more prevalent, and their application is gaining traction across many diverse scenarios. However, producing poorly constructed contracts carries significant risks, including the potential for substantial financial loss, a lack of trust in the technology, and the risk of exposure to cyber-attacks. Several tools exist to assist in developing SCs, but their limited functionality increases development complexity. Expert knowledge is required to ensure contract reliability, resilience, and scalability. To overcome these risks and challenges, tools and services based on modeling and formal techniques are required that offer a robust methodology for SC verification and life-cycle management. This study proposes an engineering framework for the generation of a robust and verifiable smart contract (GRV-SC) framework that covers the entire SC life-cycle from design to deployment stages. It adopts SC modeling and automated formal verification methodologies to detect security vulnerabilities and improve resilience, extensibility, and code optimization to mitigate risks associated with SC development. Initially, the framework includes the implementation of a formal approach, using colored Petri nets (CPNs), to model cross-platform Digital Asset Modeling Language (DAML) SCs. It also incorporates a specialized type safety dynamic verifier, which is designed to detect and address new vulnerabilities that can arise in DAML contracts, such as access control and insecure direct object reference (Idor) vulnerabilities. The proposed GRV-SC framework provides a holistic approach to SC life-cycle management and aims to enhance the security, reliability, and adoption of SCs.
Decentralized autonomous organization design for the commons and the common good
The current internet economy is characterised by a historically unprecedented bundling of private sector power over infrastructures. This situation is harmful for overcoming problems where collective action is needed, such as for governing digital commons. Organisations that run on collectively owned decentralised infrastructure are able to overcome this centralisation of power. These common decentralised autonomous organisations (DAOs) could help in fostering digitally enabled collective action. However, currently we have no clear view of how a DAO designed for commons governance would operate and be governed. By creating a conceptual prototype of a DAO governing a common, we provide a clear path of how common DAOs should mature and which tools are needed to create them. In this research, we created a governance framework for common DAOs by combining 16 works on technology for commons governance. The framework reveals that common DAO governance consists of three areas: 1) Governance structure, 2) Enabling technology, and 3) Community governance. We provide governance mechanisms that together describe an implementation of Ostrom’s common governance principles in a DAO. This work is a synthesis of previous research on technology for collective action. The proposed framework aids in standardising DAO governance for the common good and may contribute to a large scale roll-out of commons DAOs.