
KRYPTOKEISARIT

By Diligence dude

2. A "brief" description of Iota technology




As I become acquainted with Iota, I come across many unfamiliar and hard-to-understand terms and ideas. Even if you are already deep in the world of blockchains, Iota may be more difficult to approach, and its solutions may seem unfamiliar at first. In this post I will try to give a very simplified picture of what Iota's technology is all about, and to explain it without requiring a deep understanding of the more traditional blockchains. Inevitably I have to cut some corners, so the description is probably both a bit inaccurate and partly incomplete. The Iota Foundation website hosts numerous blog posts that cover the different areas of Iota's technology in far more depth and accuracy. You no longer need a doctorate in mathematics to understand Iota's technology well. So before you invest more time or money in this, here is a brief description of what Iota is all about.


First, though, let's start with blockchains and DLT. DLT (Distributed Ledger Technology) refers to a network in which the same data is shared between multiple machines or devices (nodes). The nodes communicate with each other about the state of the data, and if one node tries to change the data, the rest of the network notices the differing version and votes on which of the two versions remains in force. Simply storing the data in multiple locations is therefore not enough: the network must also be able to build consensus between two different versions of the data and decide which one stands. The basic idea is that the majority decision holds, so once the majority has approved a particular version of the data, no edited or new version will take effect unless that newer version somehow wins the majority for itself.
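To make the majority idea concrete, here is a tiny Python sketch (entirely my own illustration, not Iota or any real client code): each node reports which data version it holds, and the version backed by the majority stays in force.

```python
from collections import Counter

def network_consensus(node_views):
    """Each node reports the data version it holds; the version
    backed by the majority of nodes remains in force."""
    version, votes = Counter(node_views).most_common(1)[0]
    return version

# Five honest nodes still hold version "A"; two altered nodes push "B".
assert network_consensus(["A", "A", "A", "B", "A", "B", "A"]) == "A"
```

Real networks, of course, do not count raw node tallies for exactly the Sybil reasons discussed below; this only shows the voting principle.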


This can happen if the creator of the new version controls a larger share of the network than everyone else, either by hacking other machines or by owning a simple majority of the nodes. Owning a majority of nodes is usually too expensive in a well-designed DLT network, and hacking many nodes at once is difficult because the nodes are a heterogeneous group of very different devices, so breaking one kind of protection is not enough. There are many types of attack on a DLT network, and the consensus mechanism, i.e. the method by which the network decides which version of the data remains in force, should counteract these attacks as effectively as possible. A blockchain is one way of organizing data and building consensus on a DLT network. One traditional attack is to run multiple nodes on a single device: if nodes were simply counted, such an attack would be very easy to mount, since other nodes can hardly tell whether one device is running one node or a thousand. DLT networks prevent this by measuring the majority of the network with some resource that is scarce or very hard to replicate.


A traditional blockchain network uses computing power as that resource (Proof of Work, or PoW). In a blockchain, network participants divide into two groups: one group solves difficult computational puzzles, while the other uses the network, for example to make transfers. The network's data is divided into blocks, so a transfer must fit into one block. Among the solvers, whoever solves the puzzle first creates a new block and chooses which transfers to include in it. This block is linked to the previously solved block, and a new puzzle can then be computed on top of the newly created block.
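As a rough illustration of the "difficult puzzle", here is a toy proof-of-work loop in Python (my own sketch; real Bitcoin hashes a proper block header and defines difficulty differently): the solver tries nonce values until the block's hash starts with enough zeros, and the solved block links back to the previous one.

```python
import hashlib

def mine_block(prev_hash, transactions, difficulty=4):
    """Try nonces until the block's SHA-256 hash starts with
    `difficulty` zero hex digits (the "difficult puzzle")."""
    nonce = 0
    while True:
        header = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()
        if digest.startswith("0" * difficulty):
            # the solved block links back to prev_hash
            return nonce, digest
        nonce += 1

nonce, block_hash = mine_block("genesis", "alice->bob:5")
assert block_hash.startswith("0000")
```

Note how cheap it is to check the answer (one hash) compared with finding it (thousands of attempts on average); that asymmetry is what makes PoW work.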


After a new block is created, the other solvers can either continue their previous task and try to quickly create a competing block alongside the new one, or jump to the puzzle that builds on the newly created block. The blocks form a chain, and the longest chain is the network's current truth. You could try to rewrite a transfer buried ten blocks deep by creating your own competing branch with your own version of that transfer, but you would then have to produce a ten-block chain faster than the rest of the network combined, which would require an enormous amount of computing power. Therefore almost everyone always builds on the latest block and tries to find the next one on top of it.
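The longest-chain rule itself is simple enough to sketch (again my own toy code, not any real client):

```python
def canonical_chain(forks):
    """Of all competing forks, the longest chain is the network's
    current truth; solvers keep extending its tip."""
    return max(forks, key=len)

honest_fork   = ["genesis", "b1", "b2", "b3", "b4"]
attacker_fork = ["genesis", "b1", "x2", "x3"]  # rewrite attempt, still shorter
assert canonical_chain([honest_fork, attacker_fork]) == honest_fork
```

The attacker's fork only wins if it outgrows the honest one, which is exactly what the enormous computing-power requirement prevents.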


If a transfer is already four or five blocks below the tip, it is very difficult to create another version of it. On the other hand, if you try to inject an impossible transfer at the tip, for example moving money from an account that does not hold it, the transfer is detected as invalid and rejected by the network. There are good videos about this on YouTube if the explanation was too vague. Users of the network then wait for a solver to include their transfer in a newly created block, and they are prepared to pay for it. In many blockchains, block creators are also rewarded with newly minted coins; in Bitcoin, for example, solving a block earns the solver brand-new bitcoins.




It is worth pausing here for a moment. What does it mean that the network is split between solvers and users? First, as the share of users grows, the newly minted coins are no longer enough to pay the solvers, and users have to start paying them directly. Some time ago, if you wanted to buy a pizza with bitcoin, you might have paid 9 € for the pizza and 20 € for the transfer fee. Such a division can never work in the IoT world if the goal is truly tiny transactions, on the order of a thousandth of a cent. On top of that, Bitcoin consumes roughly as much electricity per year as the whole of Switzerland, all in order to fend off the attack described earlier.


It is also worth looking at what the division between solvers and users means in practice. Solvers are no longer hobbyists' desktop PCs, because running Bitcoin's algorithm on a desktop consumes far more electricity than the computing earns. Instead, Bitcoin's algorithm is run on so-called ASICs. An ASIC is, roughly, the program written directly into the microchip, with every extra transistor removed. Put another way, an application-specific integrated circuit (ASIC) is a microchip designed for one special application, such as a particular transmission protocol or a hand-held computer; contrast it with general-purpose integrated circuits such as the microprocessor and the RAM chips in your PC. Computing on an ASIC gives you well over 100x the speed and much lower power consumption, but such a chip can do only its one task. If Bitcoin's algorithm changed at all, these very expensive ASICs would not even qualify as pocket calculators. This leads to a problem: the solvers ultimately decide which version of the software they are willing to run, so when a network is divided into solvers and users, its technological development is bound to the solvers, and only development steps that do not threaten the solvers can be accepted.


Now we finally get to Iota and the world of IoT. Iota's data structure is not a chain but a Directed Acyclic Graph (DAG). There are good YouTube videos with much better explanations, but the basic idea is that the network is not split between solvers and users; instead, every user pushes the network a little bit forward. When you make a transfer, you check two previous transfers and make sure there is no conflict between them. You then link your own transfer to those two and wait for someone else to come along, validate your transfer, and link to it in turn. If you make a mistake and approve a conflicting transfer, those who come after you will notice it and will no longer link to you, so your branch stops growing. When validation is done properly, the transactions all interconnect into a ribbon-like web. The essential differences are that no fees are needed, only a little computing power, and that no one profits from advancing the network beyond being able to make their own transfers. If a way is found to make the network more efficient and less energy-hungry, everyone will happily adopt the upgrade.
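As a toy sketch of the Tangle idea (class and transaction names are my own invention, and real Iota uses a much cleverer tip-selection algorithm than a plain random pick): every new transfer approves two "tips", i.e. transfers that nobody has linked to yet.

```python
import random

class Tangle:
    """Toy DAG: every new transfer checks and references two earlier
    ones; transfers nobody references yet are the "tips"."""

    def __init__(self):
        self.approves = {"genesis": []}  # transfer -> transfers it approved

    def tips(self):
        referenced = {t for refs in self.approves.values() for t in refs}
        return [tx for tx in self.approves if tx not in referenced]

    def attach(self, tx_id, seen_tips=None):
        tips = seen_tips if seen_tips is not None else self.tips()
        # link the new transfer to (up to) two previous ones
        self.approves[tx_id] = random.sample(tips, min(2, len(tips)))

t = Tangle()
t.attach("tx1", ["genesis"])  # tx1 approves genesis
t.attach("tx2", ["genesis"])  # tx2 arrives concurrently, also approves genesis
t.attach("tx3")               # tx3 now sees two tips and approves both
assert set(t.approves["tx3"]) == {"tx1", "tx2"}
```

The key point the sketch shows is that issuing a transfer and validating earlier transfers are the same act, so there is no separate solver group to pay.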


(Addition: in blockchains, decision-making is shared among users, miners, exchanges, developers, and the wider ecosystem. Iota has no miners in the blockchain sense, so there are fewer conflicts over accepting updates to the system. However, if Iota becomes a generic IoT standard, IoT device makers will produce a great number of ASIC-based devices, forming a new group that will not tolerate change so easily. At that point Iota's development may well become as rigid as today's Bitcoin. It is therefore important that the major advances to the protocol are made before Iota becomes a global standard.)


Initially it was thought that a sufficient number of users alone would be enough to build reliable consensus on the network, but more research has since been done, and a new consensus mechanism, combining several different mechanisms to counter various attacks, is now being finalized.


A test version of this new consensus mechanism (GoShimmer, the first Coordicide prototype) is due in spring 2020 or at the latest in summer 2020; a first version without value transfers is actually out already, and we are waiting for value transfers and Mana to be added to it. (After this, the official version will be carefully coded and validated before the current main network is migrated to the new one.)

Iota's development began with the idea of building a new kind of hardware for the IoT world. The founders tried to create a way to do decentralized computing efficiently. This is a very broad topic, but in a nutshell: just as a DLT network keeps its data intact, you can also create a network where machines compute a given calculation together and build consensus on the final correct answer. In the crypto world the term "smart contract" is commonly used; in short, it means that certain nodes agree on something with each other, and the other nodes on the network make sure the agreed things happen by computing the data related to that agreement together. For example, when a machine transmits data to the network, the receiving party may have agreed in advance to transfer a certain amount of currency to the data provider; the network can then take care of that transfer once the requested data arrives. In cryptocurrencies, a smart contract usually means that a majority of the network takes part in computing the consensus; moreover, the computing is done on today's computers and today's hardware, and the starting point is simple: distribute the same task to everyone and reach network consensus on the right answer. This leads to a very cumbersome system. The simple cat-collecting game CryptoKitties, for example, brought the world-famous Ethereum smart-contract network to its knees when slightly more people got enthusiastic about playing it. Ethereum initially advertised itself as a world computer, but at least for now its technology is far from that original purpose.


So Iota grew out of the need to bring distributed computing to the IoT world. When it became apparent that a separate network was also needed to distribute payments to the machines taking part in the distributed computing, Iota's Tangle (the DAG network discussed above) was created. The decentralized computing network itself is a separate project that goes by the name Qubic. Qubic's solutions are very non-intuitive and strange, so it is worth saying a bit more about the IoT world first. Minimizing energy consumption is essential for IoT devices: most run on batteries or a solar panel, or are otherwise disconnected from mains electricity most of the time. A huge amount of data is being collected, but the internet's bandwidth is not enough to compute all of it centrally in the cloud. On the other hand, it is not always sensible for every scattered sensor to have its own computing unit that would sit idle most of the time and work only when a task happens to come up. There is already a huge amount of unused computing power in the world, and most of the time smart devices lie idle.


It would be most convenient if lightweight sensors could be sprinkled wherever needed and connected only to a local mesh network. The network would find nearby devices with sufficient computing power, have them process the sensor data, and forward the processed data onward. Whoever wants the data would pay the sensor for it, and the sensor would pay a share to the data processor. Computing power could thus be pooled locally, because the mesh bandwidth is sufficient and the power consumption low. The local network can run on LiFi or some other modern data-transfer mechanism.


Here, however, the problem of trust returns: a data processor could merely pretend to compute, invent results, and collect the money without spending any time or electricity. A group of several devices is therefore needed, on the principle that if a large enough group reaches the same result, the result is more likely to be correct. But as the group grows, so does the energy consumed, so the size of the group should be proportional to the importance of the result. Qubic is the system developed for this purpose. As with other networks, there are many possible attacks and attack vectors, but I will skip them and their countermeasures here, since this was supposed to be a brief description of Iota's technology; I will say more about them in a separate Qubic blog post later.
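The "large enough group" idea can be sketched in a few lines of Python (my own illustration; Qubic's actual quorum logic is far more involved): the same task goes to a group whose size grows with the importance of the result, and the majority answer wins.

```python
from collections import Counter

def verified_result(task, devices, importance):
    """Give the same task to a group of devices; the group grows with
    the importance of the result, and the majority answer wins."""
    group_size = min(len(devices), 1 + 2 * importance)  # 3, 5, 7, ...
    answers = [device(task) for device in devices[:group_size]]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes > group_size // 2 else None  # no majority -> reject

honest = lambda x: x * x
cheat  = lambda x: 42        # fakes a result without doing the work
devices = [honest, honest, cheat, honest, honest]
assert verified_result(7, devices, importance=2) == 49
```

A lone cheater cannot push a fake result through, and unimportant results can be checked by a small, cheap group.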


In the IoT world, nearly every small device is likely to be close to an ASIC, because that is the only way to build low-power devices. In other words, you come up with a device, design it, and order a microchip engraved for that device from the factory. The Iota Foundation set out to develop IoT hardware, and Qubic is strongly tied to a whole new way of doing distributed computing on microchips: performing ternary computing with standard binary (ASIC) chip components, which can reduce a chip's energy consumption at the expense of increasing its size. Energy is a scarce resource on IoT devices, so even radical solutions have a role to play if they deliver better energy efficiency.



Let's recap a little to make these things easier to structure:


Qubic = the Iota Foundation's distributed computing project.

Qubic network = the network and communication method for distributed computing.

QCM = Qubic Computation Model, a new way to design microchips and a new way to compute on them.


This new calculation method, the QCM, is ternary: it uses +, -, and 0. The chips, however, are designed so that the voltage is only 0 or +, and the components used are ordinary off-the-shelf components available today. Two bits encode one trit, so the data takes up more space. The chips are therefore not true ternary chips; only the calculation method is ternary. The calculation is done in a new way, though: where transistors would often be needed, two wires can simply be crossed and the calculation happens there. More wire is needed (again, more space), but less energy is consumed, since energy loss occurs mainly in transistors. (Note: I am not an electrical engineer, but this is the general idea.)
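The two-bits-per-trit idea can be illustrated like this (the bit patterns below are my own arbitrary choice, not the actual QCM encoding):

```python
# One trit (-1, 0, +1) is carried in two ordinary bits, so standard
# binary components can hold ternary values at the cost of extra space.
TRIT_TO_BITS = {-1: (1, 0), 0: (0, 0), 1: (0, 1)}
BITS_TO_TRIT = {bits: trit for trit, bits in TRIT_TO_BITS.items()}

def encode(trits):
    return [bit for trit in trits for bit in TRIT_TO_BITS[trit]]

def decode(bits):
    return [BITS_TO_TRIT[pair] for pair in zip(bits[::2], bits[1::2])]

word = [1, 0, -1, 1]                       # a balanced-ternary word
assert decode(encode(word)) == word
assert len(encode(word)) == 2 * len(word)  # twice the storage
```

This makes the trade-off concrete: the data doubles in size, which is the price paid for running ternary logic on binary hardware.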


In addition, a computer usually calculates something, caches the result, and then calculates more. Much of a computer's energy consumption comes from moving data between cache and computation. In the QCM model nothing is cached; the calculation proceeds and forks until the correct result is reached. The computation is a bit like a tree branching out and slowly pruning branches until the right answer is found. The hardware and the calculation method themselves thus support splitting the computation across several devices. Instead of everyone doing the same thing, a difficult calculation can be split among, say, ten groups, each of which advances its own share and forms its own consensus. Devices do not have to wait for each other to synchronize as often, because the calculation itself is designed to divide the question into several branches, and each group determines whether the correct answer is found in its branch.


It is hard to explain this briefly, and the descriptions above cut many corners at the expense of exactness, but the point is that the Qubic model contains both a DLT-like computing network (Qubic) and a whole new way of computing (the QCM), which also includes a hardware model for doing ternary computing with binary components.


How many people are capable of learning this much new and foreign material is certainly a fair question. However, if energy consumption drops and the life of a battery-powered device, for example, becomes many times longer, that provides motivation. Besides, the device's chip has to be ordered from the factory anyway, so why not order a good one while you are at it.


The Qubic network is at an advanced stage of development, and this year (2020) we expect to see the hardware operating in practice. The QCM is largely ready for the first test network, but the communication layer (the Qubic network) is still being built. As I write this, Eric Hop has said that he too is writing a blog post to give us all a more accurate update on where Qubic development currently stands.


TL;DR: In late 2020, or perhaps more likely early 2021, we should expect Iota's new consensus (with the Coordicide complete) to reach the main network, and by then the Qubic network will probably be pretty much finished and packaged as well. (Unofficial information: this is entirely my own estimate.) This assumes no major setbacks, and no new breakthroughs that would slow down or speed up development. As for breakthroughs, the IF seems fairly determined to push the current plans through and complete these long-running projects before starting new ones.


That was the essential information, in "brief". Thank you for your interest, and get ready for more detailed posts a bit later, once I get more translations done. (There is already much more information in Finnish on this blog.)
