Programming - Implementing a Blockchain using Python and Flask [Dev Blog 9]

[Image 1]


Hey it's a me again drifter1!

Well, well. I made quite the changes and I chose to skip making a Dev Blog in the weekend. So, there's a lot to talk about!

Basically, most of it is code revamping and cleanup, by creating new scripts to split bigger scripts. But, there are also important changes like relaying transactions and blocks to other known nodes, and synchronizing with the network. The Blockchain Info structure is also in use now, and has a new field where the rich list is stored. A lot of multi-threading is also implemented in order to make the heavier requests respond quicker. And talking of requests, requests now follow a specific format and the endpoints also return status codes (200 = success, 400 = failure). Let's also not forget to mention that there's a new endpoint for retrieving the last block in the chain (/blocks/last/)...

So, without further ado, let's get more in-depth!

GitHub Repository

Validation Code Revamping and Cleanup

The block validation and transaction validation checks are now located in new scripts called "" and "", respectively.

Thus, inside of the endpoint callback for transaction posting, the following section is now responsible for validating a transaction:

# check JSON format
if json_transaction_is_valid(json_transaction):
    # check transaction inputs
    if not check_transaction_inputs(settings, json_transaction["inputs"]):
        return {}, 400
    # recalculate and check the transaction input and output values and fee balancing
    if not recalculate_and_check_transaction_values(json_transaction):
        return {}, 400
    # recalculate and check hash
    if not recalculate_and_check_transaction_hash(json_transaction):
        return {}, 400
    # check input signatures
    if not verify_input_signatures(json_transaction["inputs"]):
        return {}, 400
else:
    return {}, 400
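For reference, the JSON-format check can be imagined as a simple key-and-type test. This is only a hedged sketch, not the repository's exact code: the field names inputs, outputs, value, fee and hash are assumptions based on the checks above and the structure changes described below.

```python
# Hypothetical sketch of a JSON-format check for transactions.
# The exact field set lives in the repository; these names are assumed.
REQUIRED_FIELDS = {
    "inputs": list,
    "outputs": list,
    "value": float,
    "fee": float,
    "hash": str,
}

def json_transaction_is_valid(json_transaction: dict) -> bool:
    # every required field must exist and have the expected type
    for field, field_type in REQUIRED_FIELDS.items():
        if field not in json_transaction:
            return False
        if not isinstance(json_transaction[field], field_type):
            return False
    return True
```

If this first gate fails, none of the more expensive checks (value balancing, hash recalculation, signatures) need to run at all.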

Speaking of transactions, the transaction structure no longer has the total_input field, and the total_output field has been renamed to value.

Now for blocks: blocks have a new field called transaction_count, which might be useful later on when we get into headers, to make the request and response bodies smaller. And, similar to transactions, the code for validating a block is now also faaaaaaar smaller:

# check JSON format
if json_block_is_valid(json_block):
    # check previous block hash and height
    if not check_previous_block(settings, json_block):
        return {}, 400
    # recalculate and check hash
    if not recalculate_and_check_block_hash(json_block):
        return {}, 400
    # check transactions
    if not check_block_transactions(settings, json_block["transactions"]):
        return {}, 400
else:
    return {}, 400

Due to these functions and the relay thread, the code of this endpoint callback has seen the most dramatic impact in size!
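To illustrate one of these checks: check_previous_block essentially verifies that the new block references the current chain tip. Here is a minimal sketch; the field names previous_hash, hash and height are assumptions, and the real function also takes the node settings in order to look up the last local block.

```python
# Hypothetical sketch of the previous-block check: a valid new block
# must reference the hash of the current chain tip and sit at height
# tip + 1. Field names are assumed for illustration.
def check_previous_block_sketch(last_block: dict, json_block: dict) -> bool:
    if json_block["previous_hash"] != last_block["hash"]:
        return False
    if json_block["height"] != last_block["height"] + 1:
        return False
    return True
```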

Request Code Revamping and Cleanup

Requests now follow a specific format defined in the common/ script. And a new request timeout has also been specified, with a default value of 5 seconds.

Therefore, a general GET request now simply puts the corresponding parameters into the following function:

def general_get_request(settings: Node_Settings, target_node: dict, url_path: str, json_data: dict) -> tuple[dict, int]:
    endpoint_url = "http://" + str(json_destruct_node(target_node)) + url_path
    try:
        response = requests.get(url=endpoint_url, json=json_data, timeout=settings.request_timeout)
    except:
        return {}, status.HTTP_400_BAD_REQUEST
    return response.json(), response.status_code

For a local request, the target_node is simply the settings.json_node field, which means that a local GET request is defined as:

def local_get_request(settings: Node_Settings, url_path: str, json_data: dict):
    return general_get_request(settings, settings.json_node, url_path, json_data)

For example, in order to make a request to the local endpoint for retrieving the inputs of a transaction in a given block, local_retrieve_block_transaction_inputs() is defined as:

def local_retrieve_block_transaction_inputs(settings: Node_Settings, bid: int, tid: int):
    url_path = "/blocks/" + str(bid) + "/transactions/" + str(tid) + "/inputs/"
    return local_get_request(settings, url_path, {})
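To make the URL construction concrete, here is a small self-contained sketch. The exact output of json_destruct_node is an assumption here (an "ip:port" string built from the node's JSON), not taken from the repository.

```python
def json_destruct_node_sketch(target_node: dict) -> str:
    # hypothetical: turn a node's JSON into an "ip:port" string
    return target_node["ip"] + ":" + str(target_node["port"])

def build_endpoint_url(target_node: dict, url_path: str) -> str:
    # same concatenation as in general_get_request()
    return "http://" + json_destruct_node_sketch(target_node) + url_path

# e.g. the inputs endpoint of transaction 1 in block 0 on a local node
url = build_endpoint_url({"ip": "127.0.0.1", "port": 8000},
                         "/blocks/0/transactions/1/inputs/")
```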

Blockchain Info and Rich List

As mentioned earlier, the blockchain info structure is now correctly being updated. This is done after block creation, in the create_block_relay() thread, where all the corresponding fields are updated.

Contacting the corresponding endpoint a little after the block creation response yields results such as:

{
    "name": "test_blockchain",
    "height": 1,
    "total_transactions": 3,
    "total_addresses": 3,
    "block_time": 300,
    "block_reward": 1.5,
    "rich_list": [
        {
            "address": "0xe485548ff11dbf8c884e18e9c1e53e729dbe66e2",
            "balance": 3.499
        },
        {
            "address": "0x60a192daca0804e113d6e6d41852c611be5de0bf",
            "balance": 1.2
        },
        {
            "address": "0xe23fc383c7007002ef08be610cef95a86ce44e26",
            "balance": 0.3
        }
    ]
}

This will be useful when we get into an explorer later on!

The code for that purpose is still in an early unoptimized stage, but you can check it out on GitHub.
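For reference, the core of a rich list is just sorting an address-to-balance map in descending order of balance. A minimal sketch (the real code on GitHub derives the balances from the chain itself; the addresses below are the ones from the example response above):

```python
def build_rich_list(balances: dict) -> list:
    # sort (address, balance) pairs by balance, highest first
    return [
        {"address": address, "balance": balance}
        for address, balance in sorted(
            balances.items(), key=lambda item: item[1], reverse=True)
    ]

balances = {
    "0x60a192daca0804e113d6e6d41852c611be5de0bf": 1.2,
    "0xe485548ff11dbf8c884e18e9c1e53e729dbe66e2": 3.499,
    "0xe23fc383c7007002ef08be610cef95a86ce44e26": 0.3,
}
rich_list = build_rich_list(balances)
```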

Network Relaying Transactions and Blocks

Up to this point, one had to post a transaction or block to all full nodes in the network in order for all full nodes to have the same information. Well, this is now no longer necessary, as a new script takes care of sending valid blocks and transactions to all known nodes, as a relay thread after the corresponding callbacks have finished their main function.

The general format of such a relay is:

def general_network_relay(settings: Node_Settings, function: exec, json_data: dict):
    json_nodes, status_code = local_retrieve_nodes(settings)
    if status_code != status.HTTP_200_OK:
        return
    for json_node in json_nodes:
        try:
            _thread.start_new_thread(function, (settings, json_node, json_data))
        except:
            pass

Basically, call a specific function using different threads, for all known nodes.
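The fan-out pattern itself can be demonstrated in isolation. This sketch uses the higher-level threading module instead of _thread, joins the threads only so the demo is deterministic (the relay itself is fire-and-forget), and the node entries are made up:

```python
import threading

def fan_out(function, nodes, json_data):
    # start one thread per known node, like general_network_relay()
    threads = []
    for node in nodes:
        t = threading.Thread(target=function, args=(node, json_data))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()  # demo only; the real relay does not wait

received = []
lock = threading.Lock()

def deliver(node, json_data):
    # stand-in for e.g. general_create_block(): record the delivery
    with lock:
        received.append((node["port"], json_data["hash"]))

nodes = [{"ip": "127.0.0.1", "port": 8000 + i} for i in range(3)]
fan_out(deliver, nodes, {"hash": "abc"})
```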

Network relaying a new block is thus simply:

def create_block_network_relay(settings: Node_Settings, json_block: dict):
    general_network_relay(settings, general_create_block, json_block)
which is also the first function that is called in the create_block_relay().

Network Synchronization and Example Run

When a new full node enters the blockchain system, or a node has been offline for some time, it now first contacts its known nodes to see if it's "behind" in the chain. This is basically a simple blockchain info retrieval request on some random known node. If the local height of the chain is smaller than the response's, then the missing blocks are retrieved and added to the local copy. For the unconfirmed transactions, the structure is completely wiped, and all transactions of a given random known node are retrieved to replace it. Later on, this check might happen on a more regular basis, but we shall see...
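The catch-up logic described above boils down to a height comparison. Here is a minimal sketch with in-memory lists standing in for the HTTP block retrieval; the function name is illustrative only:

```python
def synchronize_chain(local_chain: list, remote_chain: list) -> list:
    # if the local copy is behind, append the missing blocks
    local_height = len(local_chain)
    remote_height = len(remote_chain)
    if local_height < remote_height:
        local_chain = local_chain + remote_chain[local_height:]
    return local_chain

local = [{"height": 0}]                   # node that only has block0
remote = [{"height": 0}, {"height": 1}]   # node that is one block ahead
local = synchronize_chain(local, remote)
```

In the real node, each appended block would of course still go through the block validation checks from earlier.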

So, for example, let's say we run a dns_server and a full_node, and call the testing script to add some information to the chain. Starting a new full_node after that with directory "../.full_node2/", this node will contact the DNS server and retrieve the other full_node. Retrieving the information from that node, it will see that it's not up-to-date, and so it will retrieve the two missing blocks (block0 and block1). I also managed to start the second node somewhere before the second block was added, and the synchronization was still successful.

In order to try this out, you have to be in the src directory and then call the scripts in the following order, in different terminals:

# wait for some time (maybe even for the testing thread to finish posting everything)
python -d ../.full_node2

Checking the local files, you will then see that both nodes have the exact same information!





The rest is screenshots or made using

Previous dev blogs of the series

Final words | Next up

And this is actually it for today's post!

I will keep you posted on my journey! Expect such articles to come out on a weekly or monthly basis, even after the main project is finished!

Keep on drifting!
