What if your salary was paid in Libra?

Though Libra has a long way to go before running in production, the Libra testnet was launched on the day the Libra project was officially announced.
This testnet is experimental, but is supposed to have the same properties as the mainnet, making application migration from testnet to mainnet straightforward.
So it makes sense to look into Libra with this testnet, assuming that the regulatory challenges will settle one day and Libra will launch its mainnet.

As the nitty-gritty regulatory work for Libra to go into production one day has only just begun, we can see how ambitious this project really is. It is quite possible that it will never be launched at all.

So, we moved our use case a little bit into the future, but thanks to the Libra testnet, we can try it right now! (…with worthless Libra testnet coins of course…)

Working for the Wintermute DAO

Since we are imagining a distant future, we can be creative about the payment processes. Our scenario is this:

Henry and his hacker buddies are working for the Wintermute DAO. They are not employed (employment these days is only possible for government jobs), but work on the basis of a contract they signed with the Wintermute DAO, an AI representing a decentrally organized corporation and concluding contracts with all kinds of external parties.

We are not really sure why and how, but Excel actually survived the atomic wars, the zombie apocalypse and Ragnarök and is used by the Wintermute DAO to store the obligations to the contractors:

The DAO’s payment processor reads this data line by line, extracts the amount and the Libra address and transfers the amount. Optionally (see below), the signature is verified to make sure the address actually belongs to the contractor.
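
A stripped-down version of this loop could look like the following sketch. The record layout and the hard-coded values are purely illustrative; transferLibraToReceiver is the helper shown further below.

// Illustrative sketch only - the actual Main class in the repo differs in details.
String[][] obligations = {
        { "f4aa…01", "100" },   // Henry
        { "c83b…9e", "120" }    // the next contractor, and so on
};

for (String[] obligation : obligations) {
    String libraAddress = obligation[0];
    BigDecimal amount = new BigDecimal(obligation[1]);
    // Optionally verify the contractor's signature here (see the bonus section below).
    long seqNo = transferLibraToReceiver(libraAddress, amount);
    System.out.println("Paid " + amount + " Libra to " + libraAddress + " (seqNo " + seqNo + ")");
}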

Starting the Payroll…

  1. Start the java-libra-shell
  2. Create account with mnemonic with command a cwm ‘chaos crash develop error fix laptop logic machine network patch setup void’
  3. Generated account 0 is Wintermute’s sender account. For each hacker account, add the index to the mnemonic, e.g. to create Henry’s account, use a cwm ‘chaos crash develop error fix laptop logic machine network patch setup void’ 1
  4. Note the account balances and sequence numbers with command q b INDEX, e.g. q b 1 for Henry’s account
  5. Run the java-payment-processor
  6. Note how the balances have changed: account 0 (the sender account) has 700 Libra less, the balances of accounts 1-5 have increased by 100, 120, …, and the sequence numbers have changed accordingly

Implementing the Payment Process in Java

Using Libra from Java is really simple: we have implemented the application using jlibra and the extension jlibra-spring-boot-starter.

As you can see from the repo, there is not much to do: 

  1. Create a pom.xml or build.gradle for your application
  2. Specify the application.properties for your testnet
  3. Inject the utility classes: JLibra, PeerToPeerTransfer, AccountStateQuery, etc.
  4. Use the pre-configured classes in your application

We implemented all steps necessary for the payment processor in the Main class; the crucial functionality, the coin transfer, looks like this:

@Autowired
private PeerToPeerTransfer peerToPeerTransfer;

@Autowired
private AccountStateQuery accountStateQuery;

@Autowired
private JLibra jLibra;

private long transferLibraToReceiver(String receiverAddress, BigDecimal amount) {
    // Fetch the current sequence number of the sender account (logged and returned to the caller).
    long seqNo = fetchLatestSequenceNumber(KeyUtils.toByteArrayLibraAddress(SENDER_PUBLIC_KEY.getEncoded()));
    // Amounts are given in whole Libra; the transfer API expects micro-Libra, hence the factor 1_000_000.
    PeerToPeerTransfer.PeerToPeerTransferReceipt receipt = peerToPeerTransfer.transferFunds(
            receiverAddress,
            amount.longValue() * 1_000_000,
            ExtKeyUtils.publicKeyFromBytes(SENDER_PUBLIC_KEY.getEncoded()),
            ExtKeyUtils.privateKeyFromBytes(SENDER_PRIVATE_KEY.getEncoded()),
            jLibra.getGasUnitPrice(),
            jLibra.getMaxGasAmount());
    System.out.println(" (seqNo " + seqNo + ") " + receipt.getStatus());
    return seqNo;
}
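
The helper fetchLatestSequenceNumber is not shown above. A minimal sketch using the injected AccountStateQuery could look like this; note that the accessor names on the query result are assumptions, so check the jlibra-spring-boot-starter sources:

private long fetchLatestSequenceNumber(byte[] senderAddress) {
    // NOTE: queryAccountState/getSequenceNumber are assumed names, not verified jlibra API.
    return accountStateQuery.queryAccountState(senderAddress).getSequenceNumber();
}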

That’s all. In our sample application, this works against the testnet with the preconfigured accounts (corresponding to the mnemonic stated in the README).

Real World Use Cases before the year 2525

But how can we use this before the arrival of AI-Hackers and decentralized autonomous organizations? And what if we want to get our salary in EUR and just want to do international payments with Libra?

What did we do here? We connected Libra to an existing Java application. This Java application can do whatever Java applications do these days, most likely some enterprisey stuff. 

So how about connecting the payment processor of your corporate ERP system? Easy. How about integrating with one of those huge EAI products? Peasy. Workflows, BPMN engines, data science frameworks? Yes, just like that. If it’s accessible from Java (or one of the other languages with adapters, like JavaScript, Python, Go or Rust), it can use Libra. 

Bonus: Authorizing Transactions

In the old banking world, just giving your account information to the employer was enough. The banks checked all the regulatory stuff and made sure that the right person got the right money, or reversed the transaction if it did not. In the Libra world, that’s not that easy. 

There is a simple way of making sure that a given address actually belongs to the person authorized to receive the payment: signatures. The contractor signs a pre-defined message like
I am working for Wintermute DAO and own Libra address 0x123 which should receive my earnings. Signed, Peter on 12/12/2103.
This links the address to the signature: nobody could have produced a valid signature without access to the private key of the Libra address. 
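
Libra accounts use Ed25519 keys, so such a proof can be created and checked with any Ed25519 implementation. A minimal sketch with Bouncy Castle (in practice the contractor's existing Libra key pair would be used instead of a freshly generated one):

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

import org.bouncycastle.crypto.params.Ed25519PrivateKeyParameters;
import org.bouncycastle.crypto.params.Ed25519PublicKeyParameters;
import org.bouncycastle.crypto.signers.Ed25519Signer;

public class SignatureCheck {

    public static void main(String[] args) {
        byte[] message = "I am working for Wintermute DAO and own Libra address 0x123 which should receive my earnings. Signed, Peter on 12/12/2103."
                .getBytes(StandardCharsets.UTF_8);

        // Stand-in key pair; the contractor would use the key pair of the stated Libra address.
        Ed25519PrivateKeyParameters privateKey = new Ed25519PrivateKeyParameters(new SecureRandom());
        Ed25519PublicKeyParameters publicKey = privateKey.generatePublicKey();

        // Contractor side: sign the pre-defined message.
        Ed25519Signer signer = new Ed25519Signer();
        signer.init(true, privateKey);
        signer.update(message, 0, message.length);
        byte[] signature = signer.generateSignature();

        // Payment processor side: verify the signature against the contractor's public key.
        Ed25519Signer verifier = new Ed25519Signer();
        verifier.init(false, publicKey);
        verifier.update(message, 0, message.length);
        System.out.println("Signature valid: " + verifier.verifySignature(signature));
    }
}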

This way, the payments are arguably safer than the old way of transferring money, given that “the system” makes sure that the address is checked for AML and is KYCed. This will presumably be true for the “wallet-level” applications in Libra.

jlibra – Connecting Libra to Java

Having learned from other Blockchain implementations, Libra starts with a nice package, a sound framework and concept and several developer goodies which earlier implementations had to invent along the way.

Right from the start Libra keeps mass adoption and developer attraction in mind and therefore is designed to be language agnostic. Even though the node is implemented in Rust, a gRPC interface is co-deployed, so other languages and platforms can easily use Libra through gRPC. 

Using Libra 101

Libra did a nice job in creating a usable package for developers. Though Windows is currently not supported, installing a Libra node and client on Mac and Linux is straightforward, and a testnet is publicly available, so developers can immediately begin to explore the Move language and the patterns of the Libra Blockchain.

The Rust node ships with a client and deploys a gRPC interface, so the node is accessible from a variety of languages and platforms.

Entering the Java Universe

There are already several libraries for accessing these services directly from various languages without having to deal with gRPC itself. As Libra defined its interfaces with Protobuf, Java is well suited for using the interface: Protobuf supports a quite sophisticated type system, which can be mapped to the respective Java types. In contrast to dynamically typed languages like JavaScript and Python, Java is statically typed, and therefore the Java accessor classes must be generated from the Protobuf definitions in a precompile step.

Schematic Overview of Libra Integration into Java Environment

In Java, this is most often done by the build system; for jlibra, a Maven step executed before the Java compile step generates the Java classes from the Protobuf definitions. An advantage of this precompile step is type safety: as soon as the Protobuf definitions change, the library no longer compiles instead of failing at runtime.

Using Libra from Java Applications

As jlibra is still low-level and meant as a thin library layer above the Libra API, using it requires some boilerplate code in Java applications that want to access Libra.

To keep this boilerplate to a minimum, two projects have been created on top of jlibra to ease development with Libra from Java: jlibra-spring-boot-starter and java-libra-client.

Structure of jlibra projects

Using jlibra in custom Java applications

With the dependency 

<dependency>
    <groupId>dev.jlibra</groupId>
    <artifactId>jlibra-spring-boot-starter</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>

added to your pom.xml, the JLibra object can be injected into any Spring-based project like this:

@Autowired
private JLibra jlibra;

This JLibra object is preconfigured and “ready to go” via Spring Boot externalized configuration, e.g. you can provide an application.properties file like this:

jlibra.service-url=ac.testnet.libra.org
jlibra.service-port=80
jlibra.faucet-url=faucet.testnet.libra.org
jlibra.faucet-port=80
jlibra.gas-unit-price=0
jlibra.max-gas-amount=10000

Currently, the Spring starter project creates a gRPC channel itself, but you are free to provide your own implementation or inject a channel from the existing environment.
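
For example, a custom channel could be provided as a Spring bean like this. Whether the starter picks up such a user-provided bean depends on its auto-configuration, so treat this as a sketch:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LibraChannelConfig {

    @Bean
    public ManagedChannel libraChannel() {
        // Plain gRPC channel pointing at the testnet admission control endpoint.
        return ManagedChannelBuilder.forAddress("ac.testnet.libra.org", 80)
                .usePlaintext()
                .build();
    }
}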

Predefined Actions

There are predefined actions in the action package of jlibra-spring-boot-starter:

  • AccountStateQuery
  • PeerToPeerTransfer

These actions can just be @Autowired and are preconfigured as well, so you don’t have to deal with jlibra directly.
This is WIP and other actions will be added soon.

@Autowired
private PeerToPeerTransfer peerToPeerTransfer;

public void transfer(...) {
    peerToPeerTransfer.transfer(...);
}

How the starter project can be used to speed up development for Libra in Java is demonstrated by the java-libra-client project.

Accessing Libra from a Java based Shell

The showcase provides the same functionality as the Rust CLI, but is implemented in Java using Spring Shell. Though usable as a standalone client, the main focus of this project is to show the integration patterns for using Libra from Java applications.

You can try the initial release here: https://github.com/ice09/java-libra-client/releases/tag/v0.0-alpha-1

Start it with java -jar java-libra-client-0.0-alpha-1.jar

Sample recording of java-libra-client with account creation, balance query and coin transfer

invite_me: Onboarding to Ethereum chains with keybase.io identity verification and EIP-712

This is not a Blockchain-bullshit-only post. Head to https://github.com/ice09/onboarding-eip712 for a sample implementation of a private Testnet onboarding with Twitter, Reddit or Github Keybase proofs.

The challenge…

Creating a private Ethereum Proof-of-Authority testnet is actually quite easy: just follow the instructions of Parity or, for testing purposes only, cliquebait, and you’re done.

But how do you onboard new users who want to participate in your testnet if they don’t have any ETH to start with, while you still want to stay as close to your production environment in terms of gas usage as possible and therefore don’t want to enable gasless transactions in your PoA testnet?

…and the public testnet solutions

Kovan PoA

The public (Parity-powered) Kovan PoA solves this problem by providing a “Faucet Service” which is verified by

Rinkeby PoA

The public (Geth-powered) Rinkeby PoA solves this problem by providing a “Faucet Service” which is verified by

However, these methods require trust or establish dependencies on the testnet providers, which you most likely cannot and will not want to introduce in your private net. 

Analysing the problem

There are (at least) two different states a user can have in your PoA private testnet:

  1. The user has an authorized account in one of the authority node wallets.
  2. The user is connected to the private testnet, but initially has no balance.

In Proof-of-Work (PoW), a non-authorized user could start mining and get ETH for contract development; this is the Ropsten solution. In PoA, mining is not possible, so the user has no way to transact on the chain, except if the authority nodes use usd_per_tx=0 or similar so that transactions cost no gas. That is possible, but it contradicts the security measures established by charging gas for transactions (minimizing code execution, preventing infinite loops, etc.). Moreover, the setup on a test stage should always be similar to the production environment.

Recentralizing for “Almost Know Your Customer”

keybase.io does a great job in identifying users without revealing their real-life identity if they don’t want to, while making it quite difficult for attackers to create several identities. So a good enough solution for onboarding otherwise unknown users could be to bind their testnet accounts to their Keybase users.

The basic idea and the following approach are inspired by Aragon and one of their great blog posts, which every Ethereum/Solidity dev should take a look at.

In our setup, we changed two crucial factors:

  1. removed dependencies on a product (Aragon) and on the Oracle (oraclize.it) and
  2. introduced a middleware component as a new dependency, but which you control and can (and have to) host on your own server.

Keybase allows publishing files via HTTPS by copying them to the KBFS file system. Thereby, someone reading a file from https://KEYBASE_USER.keybase.pub/invite_me.json can be sure that the file invite_me.json was stored by the Keybase user KEYBASE_USER.

But this is just one proof; how can we be sure that the user really controls the private key for the address he wants to register? We simply let him sign the message with the private key and use ecrecover in a contract. 

Using this mechanism, we can be sure that:

  1. If the URL https://KEYBASE_USER.keybase.pub/invite_me.json gives back the correct JSON, the user is who he pretends to be, since he had to copy invite_me.json to KBFS
  2. The address (private key) belongs to the user, since he signed the message in invite_me.json and the signature is ecrecovered in the invite_me contract.
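
On the Java side, the same check can also be done off-chain with web3j before any ETH is sent. A minimal sketch; the real verifier works with EIP-712 typed data, which is a bit more involved:

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.SignatureException;

import org.web3j.crypto.Keys;
import org.web3j.crypto.Sign;

public class InviteMeSignatureCheck {

    // Checks that the signature over the invite_me message was created by the claimed address.
    // Sign.signedPrefixedMessageToKey applies the "\u0019Ethereum Signed Message:\n" prefix,
    // matching MetaMask's personal_sign (available in newer web3j versions).
    public static boolean signedByClaimedAddress(String message, Sign.SignatureData signature, String claimedAddress)
            throws SignatureException {
        BigInteger publicKey = Sign.signedPrefixedMessageToKey(message.getBytes(StandardCharsets.UTF_8), signature);
        String recoveredAddress = "0x" + Keys.getAddress(publicKey);
        return recoveredAddress.equalsIgnoreCase(claimedAddress);
    }
}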

MetaMask does a great job in helping users sign data with their Ethereum account’s private key. However, before EIP-712 the data to be signed was displayed as a non-human-readable hex string.

This is unacceptable, especially in crypto, a field rife with easy and otherwise unavoidable fraud attempts. 

Therefore, we are supporting EIP-712 by signing with a MetaMask version supporting the new eth_signTypedData method. EIP-712 is a major step forward and should be the only way users are required to sign data with their Ethereum account’s private key.

The following overview shows the process which lets a new user register his Keybase user for a newly generated address. Afterwards, 10 ETH are sent to this address. The user can only register once. However, the contract owner can manually unregister users if necessary.

For the ETH transfer to happen, the user must

  • have the invite_me.json as generated by the frontend part stored in his KBFS public directory, so that it can be retrieved by the server side process at the keybase.pub domain
  • have at least one of these Keybase proofs: Twitter OR Github OR Reddit

Introducing invite_me

The complete process is implemented in the “DApp” invite_me, which consists of a JavaScript frontend and a web3j-powered Java backend component called verifier.

After completing the steps, you can see the 10 ETH loaded into your account in MetaMask.

Let me try this!

First, check out this. Then, make sure that you have an understanding of how the authorization and verification work, what the Java component verifier does, and how they depend on each other.

Last, install it locally and try it out. And, most importantly, please comment here or in the reddit post if it doesn’t work for you or if you have remarks about the approach; obviously this is work in progress.

What comes next?

We’d like to try two different approaches:

  • Using 3box as a more decentralized (but not as mature) keybase alternative
  • Realizing a different use case: “Almost KYC Airdrops” 

Excel2Blockchain with web3j

If you ever come to the conclusion, for whatever reason, that pushing Excel data to the Blockchain is what you need, we are happy to help!

We recently read about the Azure Blockchain Development Kit and had to cringe about the samples. But, who are we to judge those important enterprise business use cases requiring to get data from Excel to the Blockchain? 

Therefore, we took a look at the prerequisites, setup, initialisation and finally the execution of this with the new Microsoft Azure templates. And, to be clear, we love the idea of Microsoft getting involved with Blockchains and Ethereum and pushing Azure as a starter kit; that’s really great.

However, the whole procedure seemed quite long and exhausting, which is why we want to show how easy this is with standard development methods. Of course, these lack a lot of the features and cool stuff you get with Azure. But they can help you understand better what happens under the hood. So, let’s go for it.

No, wait, Excel to Blockchain, honestly? Why?

//TODO: Insert suitable reason here.

Ok then, but how?

Thanks to web3j it is really easy to connect to an arbitrary Ethereum node, even Infura or Ganache, as simple as it is with web3.js or web3.py. All the other stuff is done with common Java dev tools: gluing with Maven, Excelling with Apache POI, etc. This is where Java shines; it’s really good at enterprisey integration stuff.

You can start directly and import the sources as a Maven project in your IDE or build it without an IDE and start it from the command line. See the details here.

Show me some code!

private Web3j httpWeb3;
private Credentials credIdentity;

private void connectToBlockchain() throws IOException {
    httpWeb3 = Web3j.build(new HttpService("http://localhost:8545"));
    log.info("Connected to HTTP/JSON Ethereum client version: "
            + httpWeb3.web3ClientVersion().send().getWeb3ClientVersion());
    // Create credentials from a private key.
    // Don't use with real (Mainnet) Ethereum accounts.
    String privateKeyDeployer = "c87509a1c067bbde78beb793e6fa76530b6382a4c0241e5e4a9ec0a0f44dc0d3";
    credIdentity = Credentials.create(privateKeyDeployer);
    BigInteger balance = httpWeb3.ethGetBalance(credIdentity.getAddress(), DefaultBlockParameterName.LATEST)
            .send().getBalance();
    log.info("Deployer address " + credIdentity.getAddress() + " has " + balance + " wei.");
}

private void deployExcelContractToBlockchain() throws Exception {
    // ExcelStorage is the contract wrapper generated by the web3j-maven-plugin (see below).
    ExcelStorage excelStorageContractHttp = ExcelStorage.deploy(httpWeb3, credIdentity,
            ManagedTransaction.GAS_PRICE, Contract.GAS_LIMIT).send();
    log.info("Deployed contract at " + excelStorageContractHttp.getContractAddress());
}

As you can see, ExcelStorage is a class which can be used like any other Java class. It has been generated by a Maven plugin (web3j-maven-plugin) at build time. It is statically typed, with methods resembling the Solidity functions of the smart contract and with all the nice aspects this brings: compile-time checks instead of runtime checks and brilliant IDE support (code completion, type checking, static code analysis).
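
Reading the Excel sheet itself is plain Apache POI, and pushing each row to the chain is then one call on the generated wrapper. A rough sketch; the column layout and the setCell method are made up for illustration, your generated ExcelStorage exposes whatever your Solidity functions define:

private void pushExcelRowsToBlockchain(ExcelStorage excelStorage) throws Exception {
    try (XSSFWorkbook workbook = new XSSFWorkbook(new FileInputStream("data.xlsx"))) {
        Sheet sheet = workbook.getSheetAt(0);
        for (Row row : sheet) {
            // Assumed layout: column 0 = key, column 1 = numeric value.
            String key = row.getCell(0).getStringCellValue();
            BigInteger value = BigInteger.valueOf((long) row.getCell(1).getNumericCellValue());
            // Hypothetical contract method - replace with the function your contract actually has.
            excelStorage.setCell(key, value).send();
        }
    }
}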

With great IDE and build tool support, you can try this out yourself; you will be surprised how lightweight and easy Ethereum smart contract usage is from Java. Even more, integration with your existing (“legacy”) software and infrastructure is straightforward. 

Have fun!

Addendum: This is a test setup only! What about the real Blockchain?

Where does the private key in the current Java sources come from? 
If you start Ganache with the mentioned mnemonic, the same accounts are generated every time, the first one being 0x627306090abaB3A6e1400e9345bC60c78a8BEf57. Its private key is the one set in the Java sources to create the Credentials for contract deployment and transactions.

Setup for other test chains

  • Create an account with infura.io
  • Use the Endpoint URL from the dashboard and copy it into the Java sources

  • Create an account with MetaMask
  • Copy the address of your account

  • Copy the private key to the Java source code to use the account for contract deployment and transactions

Identity is the New Money

This post is inspired by the book with the same title, which we found in this nice summary post by Alex Preukschat. In that blog post and especially in the book you will find all the information about Self-Sovereign Identity. We will only refer to the concepts where they are useful for our samples.

This is not a Blockchain-bullshit-only post. Head to https://github.com/ice09/identity for a sample implementation of the “federated” claim issuer sample.

The Story of Alice, Booze and her local Bank

Alice just turned 18 years old. Being in good ol’ Germany, she wants to celebrate this with a lot of booze, but her parents refuse to buy it for her, since she can do this on her own now.

Currently, she can do this easily by showing her identity card when buying the drinks. The dealer would check her birthdate and verify her age.

This obviously has major shortcomings:

  • The ID could easily be altered, stolen or otherwise not belong to the person showing it
  • The dealer gets a lot more information than necessary for this transaction, including a lot of sensitive personal data: name, address, height, etc.

So the dealer just has to verify that the person is older than 18 years; how could this work with Alice and, e.g., her local bank?

The book differentiates between identity and statements about this identity (claims), which entitle the person to do things. In our example, the fact that Alice turned 18 years old entitles her to buy and drink alcohol (in certain legislations). However, showing the identity card to the dealer mixes up two aspects very badly: the dealer has to check the identity information to derive the entitlement. He has to do this by himself; if for whatever reason (e.g. the dealer not being able to calculate the age) he cannot derive the entitlement, Alice’s request would be rejected. Also, only information which can be derived from the identity card can be used: if Alice were, e.g., incapacitated and therefore not allowed to buy alcohol, the dealer wouldn’t know, since this information is not stated on her identity card.

In the world of cryptography and decentralized data storage (read: DLT), Alice’s bank could easily help her with an identity, or better, entitlement verification service. Since Alice went through the bank’s KYC process, the bank has a lot of information about her, one piece of it being her age. Now, if the local bank is trusted by the shop owner, this information would be enough for the owner: “As your local trusted bank, we verify that this customer is older than 18 years.”

Setup for Ethereum

In Ethereum, the easiest setup would be to use ERC 725/735. These standards (requests) provide all the necessary equipment for identity creation and a claim/endorsement mechanism. Other standards and products are provided by uPort and standards bodies like the W3C with DID.

The initial setup Alice has to execute together with the claim issuers (the bank, a utility provider, an insurance company, etc.) consists mainly of three steps, which could easily be embedded into a mobile app:

  • Alice creates an identity ERC 725/735 (which might be one of multiple identities she has)
  • As a claim issuer, the bank adds a claim (using ERC 735 addClaim(…)) about Alice being older than 18 years. This claim is signed by the private key of the bank. The claim is stored with a returned ID and must be approved by Alice (1, 2).
  • Alice approves (using ERC 725 approve(claimId)) the bank’s claim on her identity.
  • Now, anyone can verify the claim about the topic she is interested in, eg. the fact that the identity is older than 18 years (4).

After the setup the process of verifying a claim is depicted below:

  • Alice signs a challenge of the shop owner with her private key (3).
  • With Alice’s recovered public key, the shop owner can look up the claim in ERC 735 and check the signature of the trusted party, which is Alice’s bank (4), against the publicly available public key of the bank.

This would make a great use case for banks desperately looking for their next Blockchain use case: forget about value transfer, asset creation, settlement and whatnot; there you are fighting against proven and established systems which might be embedded in complex processes but have been running stably, reliably and performantly for years.

Identity verification, on the contrary, could be a completely new service offering, and banks have the great advantage of having identified thousands of customers in strongly regulated processes; they have valuable data at hand. The same is true for insurers, mobile providers, utility providers, governmental agencies, etc.

But let’s say we want to decentralize even more, how could we go on?

Incentivizing Identity Verification

Couldn’t we incentivize people to do something only they can do well: prove the identities of their social network?

There are some assumptions:

  1. People know their social environment best, especially their direct environment like family, best friends and neighbors.
  2. People want to earn (even small amounts of) money with something they, and only they, can do easily by following a simple procedure

The process is almost the same as the process above, except that notaries are slashed if they commit fraud and earn fees if they behave according to the protocol. The slashing is complex and would have to be implemented by some governance mechanism. This step adds complexity, but it is solvable and will be very similar to PoS. Most likely an incentive for detecting and proving fraud will be included in the protocol.

Incentivization is a great means of getting anonymous parties to agree on a fact. This works great for Blockchains and has also been shown theoretically, e.g. by game theory and the Nash equilibrium.

Therefore, with the right incentives for notaries, the advantage of committing fraud is minimized.

Build a Social Network based on Trust

Trustlines is a decentralized, permissionless, open platform to host currency networks in which money is represented as IOUs issued by its participants. Trustlines implements money as bilateral peer-to-peer issued blockchain-based credit.

In the example above, Bob gave Alice a creditline of 20 EUR and Alice gave Bob one of 100 EUR, so they have bilateral creditlines, which together form a trustline. Bob gave Charly 30 EUR credit and Charly gave Bob 50 EUR. Now, if Alice wants to pay Charly, whom she doesn’t know personally, the system finds a path between Alice and Charly (via Bob) and the creditlines along this path are used. If Alice pays Charly 10 EUR, Bob afterwards owes Charly 10 EUR and Alice owes Bob 10 EUR. The creditlines have been used and the remaining credit is lower or higher, depending on the direction of the payment.
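
To make the bookkeeping concrete, here is a toy Java model of bilateral creditlines (not the actual Trustlines contracts, and without creditline limits) that applies a payment along the path Alice → Bob → Charly:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TrustlinesToyModel {

    // balances.get(creditor).get(debtor) = how much the debtor currently owes the creditor.
    private final Map<String, Map<String, Integer>> balances = new HashMap<>();

    private void pay(int amount, List<String> path) {
        // Each hop shifts the debt one step along the path towards the receiver.
        for (int i = 0; i < path.size() - 1; i++) {
            String debtor = path.get(i);
            String creditor = path.get(i + 1);
            balances.computeIfAbsent(creditor, k -> new HashMap<>())
                    .merge(debtor, amount, Integer::sum);
        }
    }

    public static void main(String[] args) {
        TrustlinesToyModel network = new TrustlinesToyModel();
        network.pay(10, List.of("Alice", "Bob", "Charly")); // Alice pays Charly 10 EUR via Bob
        System.out.println(network.balances); // Alice owes Bob 10, Bob owes Charly 10
    }
}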

There is no settlement in the system; if settling is necessary or wanted by some party, it is done outside of the system and then reflected in the system by an opposite transaction.

More information about the project is available on trustlines.network; we encourage you to inform yourself about its concepts of people-powered money and complementary currencies and its potential to boost cryptocurrency adoption and make cryptocurrencies accessible to all people.

Extra Trustlines Benefit: Socially proven Identity

However, for this blog post, a secondary aspect of Trustlines is more important: if you have a social network of trust, you have a social network of identities. This is immediately obvious from the simple fact that if you trust someone with a certain amount of money, you certainly know and trust this person, because you are at risk of losing money if your creditline is used. Depending on the social context, this can be family members, good friends or co-workers, and the amount of credit you give will mirror your amount of trust in these people.

Given a network with creditlines, people will most likely have creditlines with different people, and an identity (and reputation) algorithm could weigh the amount of credit given and the number of creditlines for a probabilistic identity verification.
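
Such an algorithm is not specified anywhere; as a purely illustrative heuristic, a score could grow with the number of incoming creditlines and, sub-linearly, with their size:

// Purely illustrative heuristic, not part of Trustlines: every incoming creditline adds
// log(1 + amount), so many small lines from different people count more than one huge line.
double identityScore(List<Integer> incomingCreditlines) {
    return incomingCreditlines.stream()
            .mapToDouble(amount -> Math.log1p(amount))
            .sum();
}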

This might even work pseudonymously: as someone offering a service for money, I am mostly interested in the other party being able to pay me, which is related to their trustworthiness and therefore to the creditlines given to them, not so much in who this person actually is. If 10 people who have medium-sized networks of creditlines have creditlines with the receiver, I tend to assume that:

  1. This person actually exists
  2. This person is trustworthy for some amount to other people

For now, these are the only attributes of identity which Trustlines helps to establish, but it does so with very high accuracy: that a person actually exists and that this person is trustworthy to some degree.

We won’t have information about other identity attributes like gender, age, wealth, educational degrees and so on.

Let’s fix Identity now!

If you didn’t know a good use case for Blockchain technology, now you have it. This is one of the most natural use cases for a decentralized, immutable database accessible by everyone without central control, combined with the possibility to incentivize people.

This is a win for all involved parties, enhancing privacy and security and enabling service providers to make use of their already available data. As there is no “disruption” of any existing business model, but a creation of a completely new one, public identity management can only get better.

Token ERC Comparison for Fungible Tokens

“The good thing about standards is that there are so many to choose from.” Andrew S. Tanenbaum

Current State of Token Standards

The current state of Token standards on the Ethereum platform is surprisingly simple: ERC-20 Token Standard is the only accepted and adopted (as EIP-20) standard for a Token interface.

Proposed in 2015, it has finally been accepted at the end of 2017.

In the meantime, many Ethereum Requests for Comments (ERCs) have been proposed which address shortcomings of ERC-20, partly caused by changes in the Ethereum platform itself, e.g. the fix for the call depth attack with EIP-150. Other ERCs propose enhancements to the ERC-20 Token model. These enhancements were identified through the experience gathered due to the broad adoption of the Ethereum blockchain and the ERC-20 Token standard. The actual usage of the ERC-20 Token interface resulted in new demands and requirements like permissioning and operations.

This blogpost should give a superficial, but complete, overview of all proposals for Token(-like) standards on the Ethereum platform. This comparison tries to be objective but most certainly will fail in doing so.

The Mother of all Token Standards: ERC-20

There are dozens of very good and detailed descriptions of ERC-20, which will not be repeated here. Just the core concepts relevant for comparing the proposals are mentioned in this post.

The Withdraw Pattern

Users trying to understand the ERC-20 interface, and especially the usage pattern for transferring Tokens from an externally owned account (EOA), i.e. an end user (“Alice”), to a smart contract, have a hard time getting the approve/transferFrom pattern right.

From a software engineering perspective, this withdraw pattern is very similar to the Hollywood principle (“Don’t call us, we’ll call you!”). The idea is that the call chain is reversed: during the ERC-20 Token transfer, the Token doesn’t call the contract, but the contract calls transferFrom on the Token.

While the Hollywood Principle is often used to implement Separation of Concerns (SoC), in Ethereum it is a security pattern to avoid having the Token contract call an unknown function on an external contract. This behaviour was necessary due to the Call Depth Attack until EIP-150 was activated. After this hard fork, the call depth attack was no longer possible and the withdraw pattern did not provide any more security than calling the Token directly.
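
From the users' perspective, the pattern is a two-step dance. A sketch with hypothetical web3j wrappers (Erc20Token and Exchange here stand for classes generated from your own contracts with the web3j-maven-plugin; the method signatures follow the usual web3j codegen style):

// Step 1: Alice allows the exchange contract to withdraw up to `amount` of her Tokens.
erc20Token.approve(exchangeContractAddress, amount).send();

// Step 2: Alice calls the exchange; inside deposit() the exchange contract
// calls transferFrom(alice, address(this), amount) on the Token contract.
exchange.deposit(tokenContractAddress, amount).send();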

But why should this be a problem now? The usage might be somewhat clumsy, but we can fix that in the DApp frontend, right?

So, let’s see what happens if a user uses transfer to send Tokens to a smart contract. Alice calls transfer on the Token contract with the contract address 

….aaaaand it’s gone!

That’s right, the Tokens are gone. Most likely, nobody will ever get the Tokens back. But Alice is not alone: as Dexaran, inventor of ERC-223, found out, about $400,000 in tokens (let’s just say a lot, due to the high volatility of ETH) are irretrievably lost for all of us due to users accidentally sending Tokens to smart contracts.

Even if the contract developer were extremely user-friendly and altruistic, he couldn’t design the contract so that it could react to Tokens being transferred to it and, e.g., return them, as the contract is never notified of this transfer and the event is only emitted on the Token contract.

From a software engineering perspective, that’s a severe shortcoming of ERC-20. If an event occurs (and for the sake of simplicity, we are now assuming Ethereum transactions are actually events), there should be a notification to the parties involved. There is an event, but it’s triggered in the Token smart contract, which the receiving contract cannot observe.

Currently, it’s not possible to prevent users from sending Tokens to smart contracts with the unintuitive transfer function of the ERC-20 Token contract and losing them forever.

The Empire Strikes Back: ERC-223

The first attempt at fixing the problems of ERC-20 was proposed by Dexaran. The main issue solved by this proposal is the different handling of EOA and smart contract accounts.

The compelling strategy is to reverse the calling chain (which is possible now that EIP-150 has solved the call depth issue) and use a pre-defined callback (tokenFallback) on the receiving smart contract. If this callback is not implemented, the transfer fails (costing the sender all gas, a common criticism of ERC-223).

 

Pros:

  • Establishes a new interface, intentionally not being compliant with ERC-20 with respect to the deprecated functions
  • Allows contract developers to handle incoming tokens (e.g. accept/reject) since the event pattern is followed
  • Uses one transaction instead of two (transfer vs. approve/transferFrom) and thus saves gas and Blockchain storage

Cons:

  • If tokenFallback doesn’t exist, then the contract’s fallback function is executed; this might have unintended side effects
  • If contracts assume that transfer works with Tokens, e.g. for sending Tokens to specific contracts like multi-sig wallets, this would fail with ERC-223 Tokens, making it impossible to move them (i.e. they are lost)

The Pragmatic Programmer: ERC-677

The ERC-677 transferAndCall Token Standard tries to marry ERC-20 and ERC-223. The idea is to introduce a transferAndCall function to ERC-20, but keep the standard as is. ERC-223, in contrast, is intentionally not completely backwards compatible, since the approve/allowance pattern is not needed anymore and was therefore removed.

The main goal of ERC-677 is backwards compatibility, providing a safe way for new contracts to transfer tokens to external contracts.

Pros:

  • Easy to adapt for new Tokens
  • Compatible to ERC-20
  • Adapter for ERC-20 to use ERC-20 safely

Cons:

  • No real innovations. A compromise of ERC-20 and ERC-223
  • Current implementation is not finished

The Reunion: ERC-777

ERC-777, A New Advanced Token Standard, was introduced to establish an evolved Token standard which learned from misconceptions like approve() with a value and the aforementioned send-tokens-to-a-contract issue.

Additionally, ERC-777 uses the new standard ERC-820 (Pseudo-introspection using a registry contract), which allows registering metadata for contracts to provide a simple type of introspection. This allows for backwards compatibility and other functionality extensions, depending on the ITokenRecipient returned by an EIP-820 lookup on the to address and the functions implemented by the target contract.

ERC-777 adds a lot of learnings from using ERC-20 Tokens, e.g. white-listed operators, Ether-compliant interfaces with send(…), and the use of ERC-820 to override and adapt functionality for backwards compatibility.

Pros:

  • Well-thought-out and evolved interface for tokens, incorporating learnings from ERC-20 usage
  • Uses the new standard request ERC-820 for introspection, allowing for added functionality
  • White-listed operators are very useful and are more necessary than approve/allowance, which was often left infinite

Cons:

  • Is just getting started; complex construction with dependent contract calls
  • Dependencies raise the probability of security issues: first security issues have been identified (and solved) not in the ERC-777, but in the even newer ERC-820

(Pure Subjective) Conclusion

For now, if you want to go with the “industry standard” you have to choose ERC-20. It is widely supported and well understood. However, it has its flaws, the biggest one being the risk of non-professional users actually losing money due to design and specification issues. ERC-223 is a very good and theoretically founded answer for the issues in ERC-20 and should be considered a good alternative standard. Implementing both interfaces in a new token is not complicated and allows for reduced gas usage.

A pragmatic solution to the event and money loss problem is ERC-677, however it doesn’t offer enough innovation to establish itself as a standard. It could however be a good candidate for an ERC-20 2.0.

ERC-777 is an advanced token standard which should be the legitimate successor to ERC-20. It offers great concepts which are needed on the matured Ethereum platform, like white-listed operators, and allows for extension in an elegant way. Due to its complexity and dependency on other new standards, it will take time until the first ERC-777 tokens are on the Mainnet.

Links

[1] Security Issues with approve/transferFrom-Pattern in ERC-20: https://drive.google.com/file/d/0ByMtMw2hul0EN3NCaVFHSFdxRzA/view

[2] No Event Handling in ERC-20: https://docs.google.com/document/d/1Feh5sP6oQL1-1NHi-X1dbgT3ch2WdhbXRevDN681Jv4

[3] Statement for ERC-20 failures and history: https://github.com/ethereum/EIPs/issues/223#issuecomment-317979258

[4] List of differences ERC-20/223: https://ethereum.stackexchange.com/questions/17054/erc20-vs-erc223-list-of-differences

 

 

Blockchain + Streaming Analytics = Smart Distributed Applications

We are really pleased to publish a guest contribution by Kai Wähner about Smart Distributed Applications.
Kai is Technology Evangelist and Community Director for TIBCO Software. His expertise lies within the fields of Big Data, Advanced Analytics, Machine Learning, Integration, SOA, Microservices, BPM, Cloud, Internet of Things, Blockchain and programming languages such as Java EE, Scala, Groovy, Go or R. He regularly writes about new technologies and publishes articles and conference talks on his blog.

We are approaching Blockchain 3.0 these days. You ask “3.0”? Seriously? Blockchain is still in the early adoption phase! Yes, that is true. Nevertheless, while we are moving forward to real world use cases in various industries, we see that you need more than just a blockchain infrastructure. You need to build decentralized applications. These include blockchains, smart contracts plus other applications plus integration and analytics on top of these solutions.

Middleware is Key for Success in Blockchain Projects

Blockchain is the next big thing for middleware! There is no question around this. You need to interconnect other applications, microservices and cloud offerings with a blockchain infrastructure to get real value out of it. In addition, visual analytics and machine learning have to be leveraged to find insights and patterns in blockchain and non-blockchain data. Finally, streaming analytics is used to apply these insights and patterns to new events in a blockchain infrastructure. There is a variety of use cases like fraud detection, compliance issues, optimization of manufacturing or supply chain processes, or any kind of scenarios with the Internet of Things (IoT).

Reference Architecture for Blockchain and Middleware

Variety of Blockchain Platforms including Hyperledger and Ethereum

The blockchain market is growing significantly these days. You need to think about various blockchain characteristics for your next blockchain project.

  • Who are the users of the blockchain? Is it public or private? Which partners do you need to work with?
  • Do you want to build your own blockchain infrastructure (based on a framework such as Hyperledger and one of its implementations like IBM’s Hyperledger Fabric, Iroha or Intel’s Sawtooth Lake) or leverage an existing platform (like Ethereum)?
  • Do you want to host the infrastructure yourself or leverage a cloud service like IBM’s Bluemix with its focus on Hyperledger or Microsoft’s Azure with its focus on Ethereum?
  • What development environment and tooling do you want to use (like Truffle or BlockApps on top of Ethereum)?
  • Which characteristics are important for your scenario? What about speed, security, consensus algorithms, integration of non-blockchain services, and other important aspects? Maybe an add-on on top of a blockchain is needed, like the Raiden Network, to leverage off-chain state networks and extend Ethereum with some nice properties like scalability or high performance for asset transfers?
  • Or do you want to focus on an industry-specific blockchain solution like R3 Corda or Ripple for financial services?
  • What middleware do you need? Do you need Application Integration or API Management to interconnect everything? Visual Analytics to find insights and patterns in historical blockchain data? Streaming Analytics to apply rules to action in (near) real time for new blockchain events?

The following shows how to leverage Streaming Analytics together with blockchain events. This example uses TIBCO StreamBase in conjunction with the public Ethereum test network. Note that similar scenarios can be built with any other blockchain infrastructure. A follow-up post about how to leverage middleware with Hyperledger will come soon, too.

Streaming Analytics for Correlation of Blockchain and Non-Blockchain Events

The scenario uses a Smart Contract to define a Coin system. You can mine coins and transfer them to other users (i.e. blockchain addresses). This example is similar to Bitcoin concepts and shows how to leverage streaming analytics with any custom blockchain application and smart contracts. The goal is not to show the power of smart contracts (other articles are available for that). The programming language used to develop this Smart Contract is Solidity, more or less the de facto standard for writing smart contracts for Ethereum.

Here is the Smart Contract built and deployed with Browser Solidity:

Smart Contract ‘Coin’ developed and deployed with Browser Solidity

MetaMask, a bridge to run Ethereum dApps in your Chrome browser, is running in the background to connect to the Ethereum network and commit the transactions developed with Browser Solidity. You could also use Streaming Analytics to deploy smart contracts, of course. However, in this example TIBCO StreamBase was only used for the following two parts:

  • Receive new events from the blockchain network: You can filter, aggregate, analyse or transform any events like pending transactions, logs or blockchain blocks – and also combine this information with non-blockchain events, of course. For example, you could build a streaming analytics process to analyze just the logs relevant for your specific transaction IDs to spot issues and act proactively, let’s say if a pending transaction takes too long or fails.
  • Execute transactions on the blockchain network via smart contracts: You can mine new coins, send coins to other blockchain addresses and also check the balance of an address. Anything what the smart contract allows can be included into the streaming analytics process.

The streaming analytics process monitors all Ethereum events continuously. This is not as trivial as you might know it from classical messaging systems: you cannot just listen to a topic or queue, you have to pull information out of the blockchain. Depending on the use case, you have to implement a solution which solves your problem but also does not consume too many resources. This is always a trade-off, which has to be thought through when building your streaming analytics process, and it also highly depends on the blockchain infrastructure you use and its feature set.
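
With web3j (which is also used for the StreamBase connector below), this pulling is wrapped in Rx observables that poll the node via filters. A minimal sketch outside of StreamBase, using the older Observable-based web3j API (newer versions use Flowables):

Web3j web3j = Web3j.build(new HttpService("http://localhost:8545"));

// New blocks (without full transaction bodies) - filter, aggregate or transform as needed.
web3j.blockObservable(false)
        .subscribe(block -> System.out.println("Block " + block.getBlock().getNumber()));

// Pending transactions, e.g. to spot transactions that stay pending for too long.
web3j.pendingTransactionObservable()
        .subscribe(tx -> System.out.println("Pending tx " + tx.getHash()));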

Please note that security considerations are not part of this example. In the real world, you would integrate encryption and other security requirements into the streaming process, of course. In this demo, we use “hardcoded” private keys for sending transactions. A no-go in a real world project.

Let’s now take a look at an implementation of this process.

TIBCO StreamBase + Ethereum Blockchain

Here is the demo setup:

Architecture with TIBCO StreamBase + Ethereum Blockchain

The Ethereum test network is a distributed peer-to-peer ledger. It runs on various Ethereum clients. We used one of the most mature ones on our local laptop: the geth client, implemented in Golang. It is synced and part of the Ethereum test network.

TIBCO StreamBase is used to build the streaming analytics process:

TIBCO StreamBase Connectors for Ethereum

The web3j Java API is used to connect TIBCO StreamBase with the Ethereum network through our local geth client. You just need to write the connector once and can reuse it in all your streaming processes via drag & drop and configuration. The connectors behave “just” like any other connector (such as messaging via MQTT or Apache Kafka) and like the components used to build streaming logic (such as filter, aggregate or transform).

For more details, please check out my live demo of combining streaming analytics and the Ethereum blockchain.

Building this process was actually a pretty easy task with TIBCO StreamBase. In the same way, you can build much more sophisticated blockchain processes in your real world project. Let’s also think about some other next steps.

Next Steps: Application Integration, API Management, Machine Learning, and more

A real world blockchain project needs streaming analytics to correlate blockchain and non-blockchain events to fight fraud or compliance issues, to improve efficiency in manufacturing or supply chain processes, to combine the Internet of Things with blockchain infrastructures, and for many other use cases.

Streaming analytics is just one piece of the puzzle, though. Here are some more thoughts about why you might combine blockchain with middleware and analytics:

  • Live Visualization for Real Time Monitoring and Proactive Actions
  • Cross-Integration with Ethereum and Hyperledger Blockchains
  • Data Discovery for Historical Analysis to Find Insights and Patterns
  • Machine Learning to Build Analytic Models
  • Application Integration with other Applications (Legacy, Cloud Services, …)
  • API Management to expose blockchain services and handle caching / throttling challenges
  • Native Hardware Integration with Internet of Things Devices

I will do more posts about these ideas and show more live demos in the next weeks and months. In the meantime, the first customer projects have already kicked off. Blockchain and middleware have a great and interesting future…

Keywords:

Blockchain, Ethereum, Hyperledger, Middleware, Integration, Streaming Analytics, TIBCO, StreamBase, Live Datamart, Smart Contracts, Cloud, web3j

Static Type Safety for DApps without JavaScript

DApps, starting professionally…

You might not be aware, but despite its similarities to JavaScript, Solidity is actually a statically, strongly typed language, more similar to Java than to JavaScript.

[Screenshot: static type check in Browser Solidity]

…and ending in frontend-chaos

Sadly, for a long time, there has only been one interface to Ethereum nodes, web3.js (besides JSON/RPC), which is, as its name implies, written in JavaScript.

Though providing this API in a web-native language is really a brilliant idea in terms of fast development, separation of concerns and ease of use, it is a nightmare for professional, multi-developer, multi-year enterprise products.

You may not agree with me here, but as there are currently no 10-year-old, 1,000,000 LoC enterprise projects in node.js/JavaScript out there, you should at least consider that such projects are nearly impossible to maintain with a dynamically, weakly typed language like JavaScript (JS).

So, we have this situation, where JS defines the lowest common denominator (dynamically, weakly typed):

[Diagram: call chain with JavaScript as the lowest common denominator]

when we really would like to have this situation, where Java (C#, Haskell) defines the lowest common denominator (statically, strongly typed):

[Diagram: call chain with Java as the lowest common denominator]

Removing chaos

The problem was that, up to now, only web3.js existed. However, today there is also web3.py (which is Python and therefore at least strongly typed, but still dynamically typed) and, brand new, web3j.

With the latter, we can easily model the call chain above, where we only use statically, strongly typed Java and omit JavaScript altogether. Welcome to hassle-free integration into existing Java/JEE-environments without workarounds. Finally: using the Ethereum Blockchain with Java.

If you want to actually go deeper and use Java with no RPC at all, you can also switch to EthereumJ, which is an Ethereum node implemented in Java, like Eth (C++), Geth (Go), PyEthApp (Python) or Parity (Rust). It is crucial to understand the difference between web3j and EthereumJ. If you just want to use some Ethereum node from a Java application, web3j is your choice; you are then limited to the Web DApp API, which should be enough for all “Ethereum user” use cases.

We will not explain in detail how to use web3j; any Java developer should be familiar with how such a library is used simply by adding Maven dependencies to the project.

Fixing the front-end

We could stop here, since using JavaScript for the frontend is not really problematic and a common use today.

However, if you use JavaScript in your frontend, it might really make more sense to stay with web3.js. So, we want to go further: how are we going to create the GUI if we want to have no JavaScript at all?

This is just a PoC, but if you think of any other client to the Ethereum Blockchain other than a web site (let’s say: Batchjobs, Web Services, Message Queues, Databases, other proprietary software with Java adapters (there are some!)), this should make sense to you – you really wouldn’t want to use them from web3.js (hopefully).

Using templates: Thymeleaf and Spring Boot for slim enterprisy software

We will do a step-by-step guide for creating a No-JS-Dapp. Even without any Java experience, you will be able to follow without problems. Java is not that complicated anymore!

  • Get an infura.io account and key, so you don’t have to mess around with starting your own Ethereum node
  • Clone this repo: https://hellokoding.com/spring-boot-hello-world-example-with-thymeleaf/
  • Install Maven
  • Edit these files:

    pom.xml (add this dependency to the dependencies section and add the repository; beware that web3j is a fast-moving target, check for new versions)

    <dependency>
        <groupId>org.web3j</groupId>
        <artifactId>core</artifactId>
        <version>0.2.0</version>
    </dependency>

 

<repositories>
    <repository>
        <id>oss.jfrog.org</id>
        <url>http://dl.bintray.com/web3j/maven</url>
    </repository>
</repositories>

src/main/resources/templates/hello.html (change name to balance.html)

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8"/>
    <title>Your Static Strongly Typed Wallet</title>
</head>
<body>
    <p th:text="'The balance of account ' + ${address} + ' is ' + ${balance}" />
</body>
</html>

src/main/java/com/hellokoding/springboot/HelloController.java (change name to EthereumController.java)

@Controller
public class EthereumController {

    @RequestMapping("/balance")
    public String balance(Model model,
            @RequestParam(value = "address", required = false,
                    defaultValue = "0xe1f0a3D696031E1F8ae7823581BB38C600aFF2BE") String address) throws IOException {
        Web3j web3 = Web3j.build(new HttpService("https://consensysnet.infura.io/{YOUR_INFURA_KEY}"));
        // Query the balance of the given address at the latest block.
        EthGetBalance ethGetBalance = web3.ethGetBalance(address, DefaultBlockParameter.valueOf("latest")).send();
        String balance = ethGetBalance.getBalance().toString();
        model.addAttribute("address", address);
        model.addAttribute("balance", balance);
        return "balance";
    }
}

…that’s it. Start with mvn spring-boot:run

If you encounter a connection/handshake error, you may have to import the infura certificate into your local Java keystore (I didn’t have to):

$JAVA_HOME/Contents/Home/jre/bin/keytool -import -noprompt -trustcacerts -alias morden.infura.io -file ~/Downloads/morden.infura.io -keystore $JAVA_HOME/Contents/Home/jre/lib/security/cacerts -storepass changeit

Look Ma! Displaying the wallet balance with no JavaScript!

You can call the Spring Boot web application with http://localhost:8080/balance (then the default address defined above is used) or with your own address (on the ConsenSys testnet) as request parameter address= 

Of course, you can change the Ethereum net as you like in EthereumController to Morden or mainnet; just read the welcome mail from infura.io. Or you can use a local Ethereum node like geth with RPC enabled (geth --rpc) and http://localhost:8545 as the constructor argument for the HttpService of the Web3j factory in EthereumController.

Have fun, with or without JavaScript!

Create your own Oracle with Ethereum Studio and oraclize.it

This is a follow-up post to Gambling with Oracles and describes the technical part of implementing an Oracle with oraclize.it.

People told us last time that we had not really created a binary option. This might be true; to not confuse the reader, and also to introduce just another cool use case linked to current events, we are going to bet on the winner of the EURO 2016 soccer championship.

As you see from the posting date, the championship is already finished and the winner is well known: it is Portugal.

So, to make things easier, we can just create a static Oracle: we define the Oracle call in our betting contract, and on calling update() the winner is determined, which is the static JSON {“winner”: “portugal”}.

You can call bet(winner) with exactly 1 ETH, and on calling claim() you will get either nothing or (loseBets + winBets) / (numberOfParticipantsWhoBetOnWinner), e.g. (3 + 2) / 2 = 2.5.

[Diagram: betting contract with Oracle]

Prerequisites

You will need to sign up for the ether.camp Studio and check out the corresponding GitHub project in the terminal; follow the instructions in the README.

Starting the Oracle

After starting the web server, you can access the “Oracle” with https://yourid.by.ether.camp:8080/web/euro2016winner.json which redirects to the file web/eu2016winner.json in your project.

You can test the data retrieval in the Studio: just choose the Oraclize tab, Test query, “URL” and put json(…).winner around the URL.

[Screenshot: Oraclize test query in ether.camp Studio]

That’s it: by clicking “Test” you should get the result “portugal”. So now the Oracle Smart Contract can call your Oracle datasource and submit the result via callback to your contract.

Obviously, a real Oracle should be a little more dynamic, but for testing purposes this Oracle is enough. You can change the answer by stopping the web server, changing the file manually and restarting the server.

To avoid duplicating information here, please consult the documentation for the ether.camp Studio IDE for the actual usage of the sandbox. Remember to give enough value (at least 0.1 Ether in Wei: 100000000000000000) for the update() call, and exactly 1 Ether in Wei (1000000000000000000) for the bet() calls, otherwise it will not work.

Deploying the betting contract to Morden

contract.sol contains the complete contract; it imports some helper classes and the Oraclize API, which, depending on the deployment environment, is instantiated with the corresponding contract address.

In the README you will see the local environment sample, but also the Morden example. To deploy the contract to the testnet Morden, you will have to get Ether first, which is really simple in Morden.

Note: you have to set the networkID to Morden in the constructor of the contract before deploying:

function FinalWinner() {
    oraclize_setNetwork(networkID_morden);
}


After deploying, you can click on the Transaction-Hash in the Transaction-Tab, which automatically opens the deployed contract in morden.ether.camp.

To be able to invoke the contract in ether.camp, you will have to upload the sources of the contract; however, there is a problem with inline assembly at the time of writing this post. Therefore we added a special file (for-upload-to-ether-camp-morden-as-source-only.sol-not.executable) in the root of the GitHub project just for uploading; this contract exists only for this purpose and is not executable.

In Morden, each call should be done with a value of 1 Ether; the bet() calls will throw otherwise, and the update() call needs some Ether for the Oraclize query, which costs Ether on each call (consult the pricing on oraclize.it).

Distributing Business Processes using Finite State Machines in the Blockchain

Distinct, even competing organisations with a common goal and no shared technical infrastructure represent a perfect use case pattern for blockchain usage.

The quest for the holy business use case

Like many blockchain enthusiasts, we are constantly searching for use cases to establish Distributed Ledger Technology (DLT) or even real blockchain technology in enterprises as a solution for common business problems.

[Business *]

This turns out to be difficult, since every single aspect of this technology is already solved by several established and well-known products.

In this post we will present a use case, or even a use case pattern, which we think is ubiquitous and can best be solved by blockchain technology.

For this type of use case, the other technologies, even though more mature, themselves seem like workarounds for the natural technological solution, which is blockchain-based.

Our definition of the use case pattern is:

Several distinct, competing organisations have a common goal and don't share technical infrastructure.

distinct, even competing organisations

Distinct organisations are not related to each other and therefore have no existing shared technical or organisational processes. To pursue a common goal, these processes would have to be established first, which is costly and time-consuming.

[Competing teams *]

The second aspect, competition, implies that there is no mutual trust. Obviously, this aspect is a crucial one. Blockchain technology might still make sense if only the other aspects of the pattern match, but it is only a perfect fit if this aspect matters, since the blockchain itself is inherently trust-less.

common goal

This is the fundamental requirement: if there is no common goal, there is no need to establish any collaboration. It is important to note that the common goal is most likely not related to the core competence of the organisations, but to some crosscutting concern which has to be addressed by all organisations, yet is just a cost factor.

no existing shared technical infrastructure

To put it the other way around: if a shared technical infrastructure already exists, then organisational processes to establish new technology exist as well. Given this, there are several great technology products which implement each aspect of blockchain technology (namely distributed data storage, immutability, security), most likely in an even more performant and mature way, with less maintenance and clear responsibilities.

OK, this is quite abstract and complex, so let's look at an easy example.

…want a securities account with your new account?

There is one universal retail bank, FriendlyBank, which wants people to open accounts. And there are three deposit banks, FriendlyDeposit, EasyDeposit and NiceDeposit. There is also the Regulator, who must be able to audit all transactions between the parties.

Requirement

common goal: make money

The common goal of the universal and deposit banks is to open accounts. The universal bank can act as an intermediary for deposit account opening, so you can open a securities account at one of the three deposit banks together with your new account at FriendlyBank. This way, it is a win-win situation for both parties: the intermediary gets commissions, the deposit banks get new customers.

FriendlyBank and FriendlyDeposit are related to each other: they have the same ownership structure, but are separate entities. Since the deposit banks compete with each other, the deposit banks other than FriendlyDeposit want to be sure that the most valuable customers are not steered towards FriendlyDeposit, and that the deposit account opening is not "accidentally forgotten" whenever FriendlyDeposit is not chosen.
Since the parties do not really trust each other, a trust-less (i.e. no trust necessary) solution is needed.

the wonderful world of finance IT today

As we said, the depicted scenario exists quite often. Also, technical “solutions” for these scenarios exist.

They look similar to this one:

[Flatfilesolution_crop]

We won't go into detail about what all this means, but to become trust-less, and to ensure that the data is in sync in each organisation, a lot has to be implemented; and still, shortcomings of these solutions are quite common:

  • reconciliation processes are always necessary; still, errors occur due to the missing transaction support of flat file exchange, which is almost always chosen as the "simplest" integration pattern.
  • adapters have to be built for each party, so there will soon be a many-to-many problem.
  • special adapters have to be built for regulators.
  • crosscutting concerns have to be implemented: security, authentication, auditing, transaction support

Compared to a natural solution in the blockchain

Below is the blockchain solution; it fits naturally and implements all requirements.

[bcsample]

Abstracting from the use case

Why does this solution match so well, what is the pattern behind it?

Different actors change the state of a business process on some event in a transactional way.

[Old cigarette machine *]

You certainly know this pattern: it describes a finite state machine, more exactly a distributed finite state machine, secured by cryptoeconomics, whose events are transactions.

After all, the blockchain itself is a state transition system, so it comes quite naturally to implement a simple state machine on the Ethereum EVM; it is even mentioned as a common pattern in the Solidity docs.

 

Approaching the finite state machine

How would the organisations collaborate? Informally, it could look like this:

 

[process_v1]

or like this (more) formally:

[Blockchainsolution_FSM]

So we can actually build this state machine and its transitions quite easily using the blockchain, here in pseudo-Solidity, derived from the common pattern in the Solidity docs:

contract AccountDepositStateMachine {
    enum Stages { Init, AccountOpened, DepositOpened, DepositConfirmed, AccountConfirmed }

    Stages public stage = Stages.Init;

    modifier atStage(Stages _stage) { if (stage != _stage) throw; _ }

    function nextStage() internal { stage = Stages(uint(stage) + 1); }

    // Order of the modifiers matters here!
    function openAccount() atStage(Stages.Init) transitionNext {
        if (msg.sender != FRIENDLY_BANK) throw;
    }

    function openDeposit() atStage(Stages.AccountOpened) transitionNext {
        if (msg.sender != FRIENDLY_BANK) throw;
    }

    function confirmDeposit() atStage(Stages.DepositOpened) transitionNext {
        if (msg.sender != DEPOSIT_BANK) throw;
    }

    modifier transitionNext() { _ nextStage(); }
}
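
To give an impression of how one of the parties would trigger a state transition from its own systems, here is a rough Java sketch using web3j (all addresses and gas values are placeholders, and we assume FriendlyBank's account is unlocked on the node; the real integration obviously depends on how the contract is deployed and how keys are managed):

import java.math.BigInteger;
import java.util.Collections;
import org.web3j.abi.FunctionEncoder;
import org.web3j.abi.datatypes.Function;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.methods.request.Transaction;
import org.web3j.protocol.http.HttpService;

public class FriendlyBankClient {

    // Placeholder addresses, not real deployments
    private static final String FRIENDLY_BANK_ADDRESS = "0x0000000000000000000000000000000000000001";
    private static final String STATE_MACHINE_CONTRACT = "0x0000000000000000000000000000000000000002";

    public static void main(String[] args) throws Exception {
        Web3j web3j = Web3j.build(new HttpService("http://localhost:8545"));

        // ABI-encode the parameterless openAccount() call
        String data = FunctionEncoder.encode(
                new Function("openAccount", Collections.emptyList(), Collections.emptyList()));

        // Send the transaction from FriendlyBank's account to the state machine contract
        Transaction tx = Transaction.createFunctionCallTransaction(
                FRIENDLY_BANK_ADDRESS,
                null,                                 // nonce: let the node decide
                BigInteger.valueOf(20_000_000_000L),  // gas price (placeholder)
                BigInteger.valueOf(200_000),          // gas limit (placeholder)
                STATE_MACHINE_CONTRACT,
                data);
        web3j.ethSendTransaction(tx).send();
    }
}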

Great, so that’s it, we can build any business process using the blockchain, QED.

Leaving the ivory tower

As you might guess, it is not that simple. Let’s review the actual process:

  1. Account is opened 
  2. Deposit Account is opened
  3. Deposit Confirmed
  4. Account Confirmed

Ever seen an IT process in real life? Exactly, it's not like this, not even close. Let's look at a more realistic example:

  1. Account is opened
  2. An email is sent to whoever feels responsible
  3. The Product Owner's Excel has to be updated
  4. The new reporting engine must have its data updated; it is running with MongoDB, which has to be updated
  5. The internal audit department wants its auditing data in their Oracle DB (must use 2PC)
  6. … (ok, you got it already…)

None of the tasks 2 to 5 can be accomplished from the Ethereum blockchain, since calls from "inside" to "outside" are prohibited. Let's hope this will never change: the separation of "inside" and "outside" is essential for the stability and security of the Ethereum blockchain.

We could use Oracles for this, but it would be a kind of usage which is highly inappropriate for this type of external information retrieval.

Teaser: “Mastering the Flow: Blending Reality with the Blockchain”

This post is way too long already, so here comes a teaser for the two posts to come: "Mastering the Flow: Blending Reality with the Blockchain"

The idea is really simple: let's just recentralize the business process (we eagerly await the comments to come…)

But we think this can make sense. Look at the sample above, illustrated as a flow:

[nodered]

The flow was created with Node-RED, an excellent and highly undervalued tool from IBM Emerging Technologies for flows in IoT. It could very easily be adapted to Ethereum with smart contract access by using web3.js, which can be integrated into Node.js, the basis of Node-RED (hence the name).

We make the following assertion:

More than 80% of all use cases can be realized with a centralized business process engine as a first layer, e.g. implemented in Node-RED, and a state machine implemented in the blockchain as a second layer.

The business process engine is the glue between the transactional blockchain state transitions and the secondary business processes.

What do you think of it? Please let us know in the comments, we really appreciate feedback, positive or negative.

And stay tuned for the next episode, we will get to the nitty-gritty there.

images:
[Business] Business by Christophe BENOIT under CC
[Competing teams] Competing teams by Denis De Mesmaeker under CC
[Old cigarette machine] Old cigarette machine by Walter Baxter under CC