Hivemind dev environment setup guide (what worked for me, possibly not the best way)


Hello!

As some of you may know, I am working on hivemind (woohoo, communities v2 coming soon:tm:).
Setting up a dev environment can be a bit tricky, so I figured I'd share some of the knowledge I picked up while doing it.

For the record, this is just what I found and it may not be the best way to do it; if you know ways to improve it, please comment below :)

PostgreSQL

First we need a PostgreSQL database. I think Docker is best here because it lets us run exactly the version we want without struggling to install it.

You will need to install Docker and Docker Compose.

Once you have those, create a docker-compose.yml file with this content:

version: '3'
services:
    dbpostgres:
        image: postgres:10
        container_name: hivemind-postgres-dev-container
        ports:
            - "5532:5432"
        environment:
            - POSTGRES_DB=hive
            - POSTGRES_PASSWORD=root

Then, in that directory, run:
docker-compose up -d
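
To check that the container came up properly, you can look at its status and open a psql shell inside it (the container name comes from the compose file above):

docker-compose ps
docker exec -it hivemind-postgres-dev-container psql -U postgres -d hive

Note that the compose file maps the container's 5432 to port 5532 on the host, which is why all the connection strings later in this post use 5532.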

Then connect to it using your favorite tool (I use DataGrip) and install the intarray extension (you could improve this with a post-install script and the Dockerfile, but I didn't have the motivation to do it for now):

CREATE EXTENSION IF NOT EXISTS intarray;
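
If you want that post-install improvement without a custom Dockerfile, a minimal sketch is to mount an init script into the official postgres image's /docker-entrypoint-initdb.d hook, which runs any *.sql files it finds the first time the data directory is initialized. Assuming a file named init.sql next to the compose file:

-- init.sql
CREATE EXTENSION IF NOT EXISTS intarray;

and under the dbpostgres service in docker-compose.yml:

        volumes:
            - ./init.sql:/docker-entrypoint-initdb.d/init.sql

Note that this hook only fires on a fresh database, so it won't do anything for a container that has already been initialized.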

Hive setup

(If hive has had another release since this was written, check https://gtg.openhive.network/get/bin/ for the latest binary.)
(It's better to build hived yourself, but we can assume @gtg is trustworthy.)
wget https://gtg.openhive.network/get/bin/hived-v1.25.0
mv hived-v1.25.0 hived && chmod +x hived
mkdir data

Execute hived for two seconds to create the directory structure:

./hived -d data
then exit (Ctrl+C) and delete the blockchain files
rm -rf ./data/blockchain/*
then we'll download the first 5 million blocks' block_log from @gtg and put it in the right directory
wget https://gtg.openhive.network/get/blockchain/block_log.5M -P data/blockchain/ && mv data/blockchain/block_log.5M data/blockchain/block_log
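
Once the download finishes (it's a hefty file), it's worth a quick check that the block_log ended up where hived expects it:

ls -lh data/blockchain/block_log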

Then update the hived config.ini:

nano data/config.ini

log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = webserver p2p json_rpc
plugin = database_api
# condenser_api enabled per abw request
plugin = condenser_api
plugin = block_api
# gandalf enabled witness + rc
plugin = witness
plugin = rc

# market_history enabled per abw request
plugin = market_history
plugin = market_history_api

plugin = account_history_rocksdb
plugin = account_history_api

# gandalf enabled transaction status
plugin = transaction_status
plugin = transaction_status_api

# gandalf enabled account by key
plugin = account_by_key
plugin = account_by_key_api

# and few apis
plugin = block_api network_broadcast_api rc_api

history-disable-pruning = 1
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"

# shared-file-dir = "/run/hive"
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000

flush-state-interval = 0

market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760

p2p-endpoint = 0.0.0.0:2001
p2p-seed-node =
# gtg.openhive.network:2001

transaction-status-block-depth = 64000
transaction-status-track-after-block = 42000000

webserver-http-endpoint = 0.0.0.0:8091
webserver-ws-endpoint = 0.0.0.0:8090

webserver-thread-pool-size = 8

Finally, replay your node and stop it at 5 million blocks:

./hived --replay-blockchain --stop-replay-at-block 5000000 -d data
This will cook for 10-20 minutes depending on your hardware.
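
Once the replay reaches 5 million, hived should keep running and serving the HTTP endpoint we configured on port 8091, so you can sanity-check it with a quick curl (database_api.get_dynamic_global_properties is a standard hived call; head_block_number in the reply should read 5000000):

curl -s --data '{"jsonrpc":"2.0","method":"database_api.get_dynamic_global_properties","params":{},"id":1}' http://localhost:8091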

Hivemind setup

First, clone hivemind
git clone git@gitlab.syncad.com:hive/hivemind.git
then move into the repo and switch to the develop branch (usually better for... developing)
cd hivemind
git checkout develop
and then install the dependencies
python3 -m pip install --no-cache-dir --verbose --user -e .[dev] 2>&1 | tee pip_install.log
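
A quick way to make sure the install worked is to ask the CLI for its options (this is the same script we'll use for syncing; it should print the available flags):

./hive/cli.py --help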

Then if your hived node is done replaying, you can do your first sync:

./hive/cli.py --database-url postgresql://postgres:root@localhost:5532/hive --test-max-block=4999998 --steemd-url='{"default":"http://localhost:8091"}'

This process will take quite a bit (~20 minutes or more depending on your hardware).
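
If you want to confirm the sync got where you expected, you can ask the database for the highest block hivemind recorded (assuming the develop schema still keeps blocks in a hive_blocks table, which it does as far as I can tell). It should match --test-max-block:

SELECT MAX(num) FROM hive_blocks;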

Then what I like to do is dump the database so I can get back to this state easily without having to resync everything:

PGPASSWORD=root pg_dump -h localhost -p 5532 -U postgres -d hive -Fc -f dump.dump

and if I want to restore:

PGPASSWORD=root pg_restore -h localhost -p 5532 -U postgres -d hive dump.dump -j12 --clean
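
(For reference: --clean drops the existing objects before recreating them, and -j12 runs twelve restore jobs in parallel; adjust that number to your CPU core count.)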

Finally, if you want to test some specific applications, mocks are your friends! Look into the mock_data folder for examples.

In order to do a sync and add the mock data, you can do this (replace the path with whatever mock file you have, obviously):

./hive/cli.py --database-url postgresql://postgres:root@localhost:5532/hive --test-max-block=4999998 --steemd-url='{"default":"http://localhost:8091"}' --mock-block-data-path /home/howo/projects/hivemind/mock_data/block_data/community_op/mock_block_data_community_test.json

A slight note on --test-max-block: it needs to be the height of the highest block in your mocks + 2, because hivemind trails the real blockchain by two blocks. So if you set --test-max-block to 2 and your mocks end at block 2, they won't be picked up; you'd need --test-max-block=4.
