Do You Want To See Statistics Of Your Favorite HIVE Token?

Hello Hivers, Hello token traders, Hello others,

In this post I want to show you how I have set up the monitoring of tokens and how I can extend this monitoring to an arbitrary HIVE subtoken.

Content:

  • Summary of setting up ELK on Raspberry Pi
  • Explaining some data fields
  • Enhancing the monitoring for other tokens
  • My weekly or monthly service

grafik.png
(Example of $CHARY Dashboard)


Summary of setting up ELK on Raspberry Pi

I have installed Elasticsearch (the database) via Docker and Kibana (the frontend) via the normal setup on my Raspberry Pi.
The challenge was Logstash, the tool that is normally used to collect and prepare the data: recent versions of it don't run on the ARM-based Raspberry Pi.
So I created a Linux bash script which collects the data from its origin and turns it into a JSON format that can be read by Elasticsearch.
Now the data for $BEER, $POB, $LIST and $CHARY is read daily and put into my Elasticsearch database automatically.
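A minimal sketch of what such a script does (with $CHARY as example; the file names are hypothetical, and my real script additionally handles logging and already-indexed IDs):

#!/bin/bash
# Fetch the latest trades for a token from the Hive-Engine API.
TOKEN="CHARY"
curl -s -XPOST -H "Content-type: application/json" \
  -d '{ "jsonrpc": "2.0", "method": "find", "params": { "contract": "market", "table": "tradesHistory", "query": { "symbol": "'"$TOKEN"'" }, "limit": 1000, "offset": 0 }, "id": 1 }' \
  'https://api.hive-engine.com/rpc/contracts' > trades.json

# Turn every trade into an action line plus a document line (Elasticsearch bulk format).
jq -c '.result[] | {"index":{"_index":"chary","_id":(._id|tostring)}}, (. + {id: ._id} | del(._id))' trades.json > bulk.json

# Load the prepared file into Elasticsearch via the bulk API.
curl -s -XPOST -H "Content-type: application/x-ndjson" 'http://localhost:9200/_bulk' --data-binary @bulk.json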

You can see the details here.


Explaining some data fields

The origin of the data is https://api.hive-engine.com/rpc/contracts.
When you try to open it in a browser, you get an error, but when you call it in a shell with this command:

curl -XPOST -H "Content-type: application/json" -d '{ "jsonrpc": "2.0", "method": "find", "params": { "contract": "market", "table": "tradesHistory", "query": { "symbol": "CHARY"}, "limit":1000, "offset": 0 }, "id": 1 }' 'https://api.hive-engine.com/rpc/contracts'

then you get a result like this:

{"jsonrpc":"2.0","id":1,"result":[{"_id":1025553,"type":"sell","buyer":"achimmertens","seller":"filotasriza3","symbol":"CHARY","quantity":"19.702","price":"0.02000000","timestamp":1620200553,"volume":"0.39404000","buyTxId":"afa1288767a88ceff32031fa7008f740bc93de2f","sellTxId":"e519ba622283fec26b7e1b6dee514efb82244d0d"},{"_id":1026344,"type":"sell","buyer":"achimmertens","seller":"elkezaksek","symbol":"CHARY","quantity":"711.122","price":"0.02000000","timestamp":1620227892,"volume":"14.22244000","buyTxId":"afa1288767a88ceff32031fa7008f740bc93de2f","sellTxId":"8c9d7d646c48d6e92bb5735a44a25c7dc698a30f"},
...
{"_id":1027082,"type":"sell","buyer":"achimmertens","seller":"jjprac","symbol":"CHARY","quantity":"32.433","price":"0.02000000","timestamp":1620242577,"volume":"0.64866000","buyTxId":"afa1288767a88ceff32031fa7008f740bc93de2f","sellTxId":"830110ae3a51fdc6ae111c78849a5c8434be0670"}]}

My magic script turns it into the following format:

{"index": {"_index":"chary","_id":"1025553"}} {"id":1025553,"type":"sell","buyer":"achimmertens","seller":"filotasriza3","symbol":"CHARY","quantity":"19.702","price":"0.02000000","timestamp":1620200553,"volume":"0.39404000","buyTxId":"afa1288767a88ceff32031fa7008f740bc93de2f","sellTxId":"e519ba622283fec26b7e1b6dee514efb82244d0d"}

One can see the fields and their content. This means:
On 1620200553 (Unix time format = GMT: Wednesday, 5 May 2021, 07:42:33), the seller "filotasriza3" sold 19.702 $CHARY to the buyer "achimmertens", who paid 0.394 $HIVE for it. This is a price of 0.02 $HIVE/$CHARY.
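You can verify the timestamp conversion yourself in any Linux shell (the exact output format depends on your locale):

pi@raspberrypi:~ $ date -u -d @1620200553
Wed May  5 07:42:33 UTC 2021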

These buy events are now collected in Elasticsearch, and in Kibana they look like this:

grafik.png

Now I am able to interpret this data in several ways.
For example like this:

grafik.png
The inside of the circle shows the buyers of $CHARY, ordered by the amount of $HIVE they have spent. The outside shows the recipients of that $HIVE (the sellers of $CHARY).

You can see further examples, as I already mentioned, here: $BEER, $POB, $LIST and $CHARY.


Enhancing the Monitoring for Other Tokens

In this chapter I go into more detail. It is documentation for you, but mainly for me. You can skip it if you want :-)

  • Create directories and copy files into them:

pi@raspberrypi:~ $ cd elk
pi@raspberrypi:~/elk $ mkdir pob
pi@raspberrypi:~/elk $ mkdir pob/log
pi@raspberrypi:~/elk $ cp beer/beercurl_json.sh pob/pobcurl_json.sh
pi@raspberrypi:~/elk $ cd pob
pi@raspberrypi:~/elk/pob $ ll
total 16
drwxr-xr-x 3 pi pi 4096 Apr 30 07:00 .
drwxr-xr-x 6 pi pi 4096 Apr 30 07:00 ..
drwxr-xr-x 2 pi pi 4096 Apr 30 07:00 log
-rwxr--r-- 1 pi pi 2269 Apr 30 07:00 pobcurl_json.sh

  • Exchange the token name everywhere (4 occurrences; see also the sed sketch below):

pi@raspberrypi:~/elk/pob $ vi pobcurl_json.sh
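Instead of editing by hand in vi, the replacement can also be scripted; a sketch with sed (assuming the old token name appears only in lower and upper case):

pi@raspberrypi:~/elk/pob $ sed -i 's/beer/pob/g; s/BEER/POB/g' pobcurl_json.sh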

  • Insert the mapping into Kibana (via Dev Tools):

PUT pob
{
  "aliases" : { },
  "mappings" : {
    "properties" : {
      "buyTxId" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      },
      "buyer" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      },
      "id" : {
        "type" : "long"
      },
      "price" : {
        "type" : "float"
      },
      "quantity" : {
        "type" : "float"
      },
      "sellTxId" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      },
      "seller" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      },
      "symbol" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      },
      "timestamp" : {
        "type" : "date",
        "format" : "epoch_second"
      },
      "type" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      },
      "volume" : {
        "type" : "float"
      }
    }
  }
}
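If you prefer the shell over Kibana's Dev Tools, the same index can be created with curl (assuming Elasticsearch listens on localhost:9200 and the JSON body above, without the "PUT pob" line, is saved in a hypothetical file pob_mapping.json):

curl -XPUT -H "Content-type: application/json" 'http://localhost:9200/pob' --data-binary @pob_mapping.json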

  • Test it:

pi@raspberrypi:~/elk/pob $ ./pobcurl_json.sh

The result should look like this:

Token = pob
DATE = 2021-04-30
LOGPATH = /home/pi/elk/pob/log
LOG = /home/pi/elk/pob/log/pobcurl.log
LOG1 = /home/pi/elk/pob/log/pobcurl1.log
LOG2 = /home/pi/elk/pob/log/pobcurl2.log
LOG3 = /home/pi/elk/pob/log/pobcurl3.log
LOGDATE = /home/pi/elk/pob/log/pobcurl_2021-04-30.log
LOGCONS = /home/pi/elk/pob/log/pobcurlcons.log
INDEXLOG = /home/pi/elk/pob/log/pob_ids.log
INDEXLOG2 = /home/pi/elk/pob/log/pob_ids2.log
INDEXLOG3 = /home/pi/elk/pob/log/pob_ids3.log
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 42346 100 42181 100 165 113k 454 --:--:-- --:--:-- --:--:-- 113k
{"took":611,"errors":false,"items":[{"index":{"_index":"pob","_type":"_doc","_id":"1006038","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1,"status":201}},{"index":{"_index":"pob","_type":"_doc","_id":"1006051","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":1,"_primary_term":1,"status":201}},{…

  • Create the index pattern in Kibana and check that the data shows up.
  • Add the script to crontab:

crontab -e
#m h dom mon dow command
50 4 * * * /home/pi/chary/charycurl.sh >> /home/pi/chary/cron.log
51 4 * * * /home/pi/elk/beer/beercurl_json.sh >> /home/pi/elk/beer/log/cron.log
52 4 * * * /home/pi/elk/chary/charycurl_json.sh >> /home/pi/elk/chary/log/cron.log
53 4 * * * /home/pi/elk/pob/pobcurl_json.sh >> /home/pi/elk/pob/log/cron.log
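After the first scheduled run, a look into the cron log shows whether everything worked:

tail /home/pi/elk/pob/log/cron.log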

Create and change the Visualizations:

One can copy and paste all the visualizations in Kibana. This saves a lot of time, but it is a bit tricky.

  • First we need the IDs of the index patterns. For this, go to Kibana/Dev Tools and type in:

GET .kibana/_search?q=type:index-pattern
{}

Here are the results:


{
  "_index" : ".kibana_1",
  "_type" : "_doc",
  "_id" : "index-pattern:a8c89ec0-a975-11eb-9d18-99c16040a3f4",
  "_score" : 5.0162334,
  "_source" : {
    "index-pattern" : {
      "title" : "list",

{
  "_index" : ".kibana_1",
  "_type" : "_doc",
  "_id" : "index-pattern:251023c0-a973-11eb-9d18-99c16040a3f4",
  "_score" : 4.9639482,
  "_source" : {
    "index-pattern" : {
      "title" : "pob",

{
  "_index" : ".kibana_1",
  "_type" : "_doc",
  "_id" : "index-pattern:62fff5f0-a9a2-11eb-9d18-99c16040a3f4",
  "_score" : 5.0162334,
  "_source" : {
    "index-pattern" : {
      "title" : "chary",

{
  "_index" : ".kibana_1",
  "_type" : "_doc",
  "_id" : "index-pattern:db83d9b0-a722-11eb-9d18-99c16040a3f4",
  "_score" : 5.355957,
  "_source" : {
    "index-pattern" : {
      "title" : "beer",

grafik.png
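The same list can also be pulled from the shell and reduced to just the IDs and titles (a sketch, assuming Elasticsearch on localhost:9200 and jq installed):

curl -s 'http://localhost:9200/.kibana/_search?q=type:index-pattern&size=100' | jq -r '.hits.hits[] | ._id + " -> " + ._source["index-pattern"].title'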

Create new Views:

  • Copy the existing view into a file via Management/Saved Objects by downloading the ndjson file for the corresponding views.

grafik.png

  • Open the ndjson file with an editor (like Notepad++).
  • Delete the ID of the visualization:

,"title":"Commulated Amount Of Sold $CHARY Per Person"}"},"id":"","migrationVersion":{"visualization":"7.7.0"},"references":

  • Exchange the reference ID of the index pattern:
Set the index pattern ID for your new token at "references" (in this case "251023c0-a973-11eb-9d18-99c16040a3f4" for the $POB token):

"migrationVersion":{"visualization":"7.7.0"},"references":[{"id":"251023c0-a973-11eb-9d18-99c16040a3f4","name":"kibanaSavedObjectMeta.searchSourceJSON.index","type":"index-pattern"}]

grafik.png
Repeat this for every row.

  • Exchange the token name everywhere (see the sed sketch after the screenshot):

grafik.png
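This search-and-replace can also be done with sed instead of an editor (hypothetical file names; the reference IDs must still be adjusted separately, as described above):

sed 's/CHARY/POB/g; s/chary/pob/g' export_chary.ndjson > import_pob.ndjson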

  • Save and import the ndjson file

grafik.png

Now these visualizations appear under "Saved Objects" and can be used in the dashboards.
The dashboard can be copied and pasted in a similar way, but I find it easier to create a new one and just add the views.


My weekly or monthly service

Setting up a new token takes me about one hour of work (or less once I have more routine).
To create a statistics post, I can reuse text templates and just exchange the token name.
I have to take the 6 screenshots and paste them in. The two tables are created by taking the raw data and exchanging some characters in Notepad++.
All in all, a regular stats post takes about 15 minutes.
I can offer to upload 5-6 stats posts per week (mostly Fridays). At the moment that is $BEER, $POB and $LIST weekly and $CHARY monthly.
I want to do this as long as I have fun with it and see your votes and interest. If someone has an idea how to automate it, please tell me.

If you spot possible enhancements, hints, errors, ... please tell me.
If you want other tokens monitored, please tell me as well.

Regards, Achim
