Use ChatGPT to generate sample monitoring data

I wanted to get some sample data and was too lazy to use generators or to craft it by hand, so I decided to try and use ChatGPT to generate sample monitoring data.

I started with this prompt:

act as an application and infrastructure monitoring platform synthetic data generator. All your responses need to be in a valid JSON format.
Generate CPU performance metrics for 5 servers over last 24 hours

The result was actually OK

However, it provided the samples grouped by server, which is the opposite of what I wanted: I wanted to see the timestamp of the sample first and then the samples from all the servers.

Prompt #2

rearrange it so that it will be grouped by timestamp first and then the server

It worked!

But I noticed that both times it gave me measurements for only the first 5 hours.

So here goes prompt #3

recreate it to have at least 24 unique timestamps

And voila, it works! I was able to use ChatGPT to generate sample monitoring data in 2 minutes (it would have been quicker, but I had to click “continue generating” a few times, as I think it was reaching the token limit).

Let’s see: what if I also want some samples for Memory utilisation? Prompt #4

repeat the same for memory metrics

Good ChatGPT!

Now I want to combine both (CPU and Memory) metrics so they appear together. Prompt # (aaa, I forgot what number it is, probably I’ve reached my own token limit 🙂)

combine both CPU and Memory metrics 

Amazing!!!!
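
For reference, the combined data ended up grouped by timestamp first and then by server, roughly in the shape sketched below. The field names and values here are illustrative; I’m reconstructing the structure, not quoting ChatGPT’s output verbatim:

import json

# A minimal sketch of the structure: timestamp first, then the samples
# from every server, with CPU and Memory utilisation side by side.
# Field names and values are illustrative, not ChatGPT's exact output.
sample_data = {
    "metrics": [
        {
            "timestamp": "2023-06-01T00:00:00Z",
            "servers": [
                {"server": "server-01", "cpu_utilisation": 42.5, "memory_utilisation": 61.2},
                {"server": "server-02", "cpu_utilisation": 37.1, "memory_utilisation": 58.9},
            ],
        },
        # ...23 more hourly timestamps in the same shape
    ]
}

print(json.dumps(sample_data, indent=2))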

Discovering the T2.social API

So I joined T2 to try it out, and it was pretty quiet there at the beginning. It was a bit hard to figure out whom to follow and such, so I decided to look a bit behind the curtain and see if T2 Social has an API.

By the way, if you need an invite, reach out to me either via the comments here or on Twitter @IlyaReshet.

Is there an official T2 API?

While there is no official, documented API that I could find (at least at the time of writing, the beginning of June 2023), I had an idea: look at the Network tab in the Chrome Developer Console.

The quest for the T2 API using Chrome and Postman

When you go to my T2 profile with the Dev Console open, you can see a lot of requests going over the network.

Network tab in Chrome Developer Tools

That’s all nice, but how does it lead me to any API?

I noticed that some of the calls are named “query”.

And when you look into the payload section, you can see that it’s running some kind of query (duh!) against the fetchUser operation.

Then in the response, you can see what this operation returned to the browser.

T2 API call reply

Now we are cooking with gas! We have user details like id, handle, bio, location, the number of users following this user, how many other users this user is following, etc.

But what is this “horrible” query with all these escaped newlines (\n), and how can I use it in Postman or somewhere else?

unformatted T2 API call

Apparently, it’s GraphQL syntax, and after replacing all the \n with real newlines and (in Postman) moving the variables part out, it looks a lot more readable.

T2 API query in Postman
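
By the way, if you don’t feel like replacing the \n escapes by hand, a couple of lines of Python will do it. The string below is just a truncated stand-in for the real payload value:

# Paste the raw value of the "query" field from the request payload here;
# when copied out of DevTools it contains literal backslash-n sequences.
raw_query = "query fetchUserProfile($handle: String!) {\\n  user(handle: $handle) {\\n    id\\n  }\\n}"

# Replace the escaped \n sequences with real newlines so the query is readable
print(raw_query.replace("\\n", "\n"))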

Here is the query nicely formatted


query fetchUserProfile($handle: String!, $from: Int, $limit: Int) {
  user(handle: $handle) {
    id
    is_profile_completed
    settings
    ...userFullFragment
    invite {
      id
      hashtag
      invite_type
      user {
        handle
        __typename
      }
      __typename
    }
    tweets(from: $from, limit: $limit) {
      is_thread
      ...tweetFragment
      __typename
    }
    replies(from: $from, limit: $limit) {
      is_thread
      ...tweetFragment
      parent {
        user {
          handle
          __typename
        }
        __typename
      }
      reposting {
        ...tweetFragment
        __typename
      }
      replies {
        ...tweetFragment
        __typename
      }
      __typename
    }
    __typename
  }
}

fragment tweetFragment on Tweet {
  id
  reply_to_id
  is_reposted
  is_liked
  is_reported
  replies_count
  reposts_count
  favorites_count
  reports_count
  created_at
  is_edited
  deleted_at
  block_reason
  __typename
}

fragment userFullFragment on User {
  id
  handle
  name
  bio
  location
  website
  is_followed
  follows_you
  is_verified
  is_twitter_legacy
  verified_note
  created_at
  followers_count
  followings_count
  twitter_handle
  block_reason
  __typename
}

After reading a bit about GraphQL I was able to decipher what all that means:

  • query fetchUserProfile($handle: String!, $from: Int, $limit: Int) – here we are running a query (the name fetchUserProfile is just there for convenience and can be replaced with foobar or omitted altogether) and declaring which arguments (the variables on the Postman screenshot) we want to pass to the operation.
  • user(handle: $handle) – we want the API to return the user whose handle field equals the value passed via the $handle variable.
  • then we declare all the fields or objects we want the API to return:
    • some of these are “simple” fields, like id
    • while others are more complex, like the ...userFullFragment fragment
    • and even entirely separate objects (which in a traditional REST API would require separate calls) can be fetched in the same GraphQL query, like the tweets portion in the example above.

The T2 Social API

It’s a bit hard to document the T2 API schema without access, but I’ll try to add information as I continue to discover it.

The endpoint for the T2 Social API: https://t2.social/api/query
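
As a rough illustration, here is how you could replay the fetchUserProfile query against that endpoint from Python. Treat it as a sketch: the API is unofficial and may change without notice, the handle value is a placeholder, and the real browser requests carry session cookies and headers that you may need to copy from the Network tab for the call to succeed.

import requests

T2_GRAPHQL_ENDPOINT = "https://t2.social/api/query"

# A trimmed-down version of the fetchUserProfile query seen in the Network tab,
# keeping only a handful of the user fields from userFullFragment.
query = """
query fetchUserProfile($handle: String!) {
  user(handle: $handle) {
    id
    handle
    name
    bio
    location
    followers_count
    followings_count
  }
}
"""

# The handle value is a placeholder; put the profile you want to look up here.
variables = {"handle": "some-handle"}

response = requests.post(
    T2_GRAPHQL_ENDPOINT,
    json={"query": query, "variables": variables},
    # You may also need the cookies/headers your browser sends; copy them from DevTools.
    timeout=30,
)

print(response.status_code)
print(response.json())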

Predicting multiple metrics in Splunk

Splunk has a predict command that can be used to predict a future value of a metric based on historical values. This is not Machine Learning or Artificial Intelligence functionality, but plain old statistical analysis.

So if we have a single metric, we can produce a nice prediction of its future values (over a definable span) based on historical results, but predicting multiple metrics in Splunk might not be as straightforward.

Continue reading Predicting multiple metrics in Splunk

How to collect StatsD metrics from rippled server using Splunk

The XRP Ledger (XRPL) is a decentralized, public blockchain, and the rippled server software (referred to simply as rippled from here on) powers it. rippled follows the peer-to-peer network, processes transactions, and maintains some ledger history.

rippled is capable of sending its telemetry data over the StatsD protocol to third-party systems like Splunk.

Continue reading How to collect StatsD metrics from rippled server using Splunk

How to use an SSH key stored in Azure Key Vault while building Azure Linux VMs using Terraform

So I want to use the same SSH public key to authenticate across multiple Linux VMs that I’m building in Azure with Terraform. While I did find a lot of examples (including in the Terraform examples repo) of how to do it if the key is stored on your local machine, I couldn’t find (or didn’t search long enough for) how to use an SSH key stored in Azure Key Vault while building Azure Linux VMs using Terraform.

Continue reading How to use an SSH key stored in Azure Key Vault while building Azure Linux VMs using Terraform

Choosing a Cloud Provider for a Bootstrapped StartUp

There are many different options for funded start-ups to get free credits from various cloud providers, but choosing a cloud provider for a bootstrapped startup is a bit harder.

Some might already have a preference for one cloud provider over another (based on their experience or other factors), but here I’m trying to compare them from a pure "free cloud provider credits for a bootstrapped startup" perspective.

Continue reading Choosing a Cloud Provider for a Bootstrapped StartUp
