Add new LLM models to Splunk MLTK
ISbyR (Infrequent Smarts by Reshetnikov) – Tue, 24 Jun 2025
https://isbyr.com/add-new-llm-models-to-splunk-mltk/

The post Add new LLM models to Splunk MLTK appeared first on ISbyR.

Splunk MLTK 5.6.0+ allows you to configure LLM inference endpoints, but the list is somewhat limited. Below, I’ll explain how you can add new LLM models to Splunk MLTK.

The Issue

You can configure any of the pre-added models in the Splunk UI by going to the MLTK App and then hitting the “Connection Manager” tab.

When you select a service, you can see a list of pre-defined models. These are already somewhat outdated; for example, for Gemini, you don’t have any of the 2.5 models.

So, “how do we add new LLM models to Splunk MLTK?” you might ask.

The Solution

Easy-ish…

A bit of background

This configuration is managed in a Splunk KV Store collection (named mltk_ai_commander_collection); in essence, it’s one big JSON document that holds all the providers and their models.

For example, here is the snippet for the Gemini service and the first of its models:

        "Gemini": {
            "Endpoint": {
                "value": "https://generativelanguage.googleapis.com/v1beta/models",
                "type": "string",
                "required": false,
                "description": "The API endpoint for sending chat completion requests to Google's Gemini language model."
            },
            "Access Token": {
                "value": "",
                "type": "string",
                "required": true,
                "hidden": true,
                "description": "The authentication token required to access the Gemini API."
            },
            "Request Timeout": {
                "value": 200,
                "type": "int",
                "required": false,
                "description": "The maximum duration (in seconds) before a request to the Gemini API times out."
            },
            "is_saved": {
                "value": true,
                "type": "boolean",
                "required": false,
                "description": "Is Provider details stored"
            },
            "models": {
                "gemini-pro": {
                    "Response Variability": {
                        "value": 0,
                        "type": "int",
                        "required": true,
                        "description": "Adjusts the response's randomness, impacting how varied or deterministic responses are."
                    },
                    "Maximum Result Rows": {
                        "value": 10,
                        "type": "int",
                        "required": false,
                        "description": "The maximum number of result entries to retrieve in a response."
                    },
                    "Max Tokens": {
                        "value": 2000,
                        "type": "int",
                        "required": false,
                        "description": "The limit on the number of tokens that can be generated in a response."
                    },
                    "Set as default": {
                        "value": false,
                        "type": "boolean",
                        "required": false
                    }
                },

So if we want to add a new model, all we need to do is add another entry to the models object.
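Since the collection is just JSON, the edit itself can be sketched in Python. The helper below is mine, not part of MLTK: it deep-copies an existing model entry and registers the copy under the new name, making sure the copy isn’t marked as the default model.

```python
import copy

def add_model(config: dict, service: str, source: str, new_name: str) -> dict:
    """Copy an existing model entry under `service` and register it as `new_name`.

    `config` is the parsed KV Store JSON; `source` is a model that is
    already present, e.g. "gemini-pro". (Helper name is mine, not MLTK's.)
    """
    models = config[service]["models"]
    entry = copy.deepcopy(models[source])  # leave the original entry untouched
    entry["Set as default"] = {"value": False, "type": "boolean", "required": False}
    models[new_name] = entry
    return config
```

The deep copy matters: a plain assignment would make the new model share nested dicts with the source model, so editing one would silently edit the other.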

While there is a Lookup Editor app, it will only help you edit a KV Store collection if a lookup is configured for it, which is not the case for mltk_ai_commander_collection.

High-level steps

Another way (and the one we will take) is to use the Splunk REST API; at a high level, it consists of the following steps:

  1. Get the current configuration (and the _key of the collection item) in JSON format
  2. Update the JSON payload in a text editor
  3. Update the KV Store collection with the new JSON

Detailed steps

I will provide examples using Postman, but you can use curl or any other method of your choice for interacting with the REST API.

Get the current configuration

Run a GET call to the collection/data endpoint

The actual URL is https://localhost:8089/servicesNS/nobody/Splunk_ML_Toolkit/storage/collections/data/mltk_ai_commander_collection

Copy the results and take note of the _key at the end of the JSON.
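For reference, this first step can be sketched in Python with only the standard library. The basic-auth credentials and the default management port 8089 are assumptions, and the helper names are mine:

```python
import base64
import json
import urllib.request

BASE = "https://localhost:8089/servicesNS/nobody/Splunk_ML_Toolkit"
COLLECTION = "mltk_ai_commander_collection"

def collection_url(base: str = BASE, collection: str = COLLECTION, key: str = "") -> str:
    """Build the KV Store data endpoint URL, optionally for a single record."""
    url = f"{base}/storage/collections/data/{collection}"
    return f"{url}/{key}" if key else url

def fetch_collection(user: str, password: str) -> list:
    """GET the collection; Splunk returns a JSON array of records."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        collection_url(),
        headers={"Authorization": f"Basic {token}"},
    )
    # A self-signed Splunk cert may require an unverified SSL context here.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def record_key(records: list) -> str:
    """The _key sits at the end of the (single) returned record."""
    return records[0]["_key"]
```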

Update the JSON

Paste the JSON in a text editor of your choice.

Go to the provider for which you want to add a new model (Gemini, in our case).

Duplicate the model object inside the Service object and change the model name.

For example, here I copied an existing model object, pasted it at the end of the Gemini service object, and renamed it to gemini-2.0-flash.

NOTE: You must ensure that the model name you provide here is exactly the same as it would appear when calling the inference API for the LLM Service.

For example, for Gemini, the name must match the model ID used in the inference URL (e.g. gemini-2.0-flash).
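One way to double-check the name is to ask the LLM service itself. This is a sketch, assuming Gemini’s ListModels endpoint at the same base URL that the KV Store config already uses; it lists the available models and compares them against the bare model ID:

```python
import json
import urllib.request

# Same base URL as the Endpoint value in the KV Store config above.
LIST_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def list_models(api_key: str) -> dict:
    """Fetch the model listing from Gemini's ListModels endpoint."""
    with urllib.request.urlopen(f"{LIST_URL}?key={api_key}") as resp:
        return json.load(resp)

def model_exists(listing: dict, name: str) -> bool:
    """Listed names look like 'models/<id>'; compare against the bare id."""
    return any(m.get("name") == f"models/{name}" for m in listing.get("models", []))
```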

Update the KV collection

Now we need to update the collection with the updated JSON payload.

Send a POST request to the collection/data endpoint

  • replace the _key part of the URL with the value that you have in your JSON
  • remove the square brackets ([]) that surround the JSON

The actual URL is something like this: https://localhost:8089/servicesNS/nobody/Splunk_ML_Toolkit/storage/collections/data/mltk_ai_commander_collection/68540d2d0d2a214efd0d3b61.
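This update step can be sketched the same way; the unwrap helper (my name) makes the bracket-stripping explicit by posting the bare record object rather than the array Splunk returned:

```python
import base64
import json
import urllib.request

def unwrap(records):
    """Splunk returns the collection as a JSON array; updating a single
    record means POSTing the bare object, i.e. the JSON without the
    surrounding square brackets."""
    if isinstance(records, list):
        if len(records) != 1:
            raise ValueError("expected exactly one record")
        return records[0]
    return records

def update_record(url_with_key: str, record, user: str, password: str) -> None:
    """POST the updated JSON back to .../data/<collection>/<_key>."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        url_with_key,
        data=json.dumps(unwrap(record)).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on a non-2xx response
```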

Now, refresh the Connection Manager page and enjoy a fresh new model at your disposal.

Simply use the new model in the | ai command.

And here is a sneak peek into an LLM Telemetry dashboard I’m working on.

I hope this helped you understand how to add new LLM models to Splunk MLTK.

Discovering the T2.social API
ISbyR – Sat, 08 Jul 2023
https://isbyr.com/discovering-the-t2-social-api/

The post Discovering the T2.social API appeared first on ISbyR.

So I’ve joined T2 (now Pebble) to try it out, and it was pretty quiet there in the beginning. It was a bit hard to see whom to follow and such, so I decided to look behind the curtain and see if T2 Social has an API.

By the way, if you need an invite, reach out to me either via the comments here or on Twitter @IlyaReshet.

Is there a T2 official API?

While there is no official, documented API (at least at the time of writing, early June 2023) that I could find, I had an idea: look at the Network tab in the Chrome Developer Console.

Quest after the T2 API using Chrome and Postman

When you go to my T2 profile with the Dev Console open, you can see a lot of requests going over the network.

Network tab in Chrome Developer Tools

That’s all nice, but how does it lead me to any API?

I noticed that some of the calls are named “query”.

And when you look into the payload section, you can see that it is running some kind of query (duh!) against the fetchUser operation.

Then in the response, you can see what this operation returned to the browser.

T2 API call reply

Now we are cooking with gas! We have user details, like id, handle, bio, location, the number of users following this user, how many other users this user is following, etc.

But what is this “horrible” query with all these new lines (\n), and how can I use it in Postman or somewhere else?

unformatted T2 API call

Apparently, it’s GraphQL syntax, and after replacing all the \n with actual new lines and (in Postman) moving the variables part out, it looks a lot more readable.

T2 API query in Postman

Here is the query, nicely formatted:


query fetchUserProfile($handle: String!, $from: Int, $limit: Int) {
  user(handle: $handle) {
    id
    is_profile_completed
    settings
    ...userFullFragment
    invite {
      id
      hashtag
      invite_type
      user {
        handle
        __typename
      }
      __typename
    }
    tweets(from: $from, limit: $limit) {
      is_thread
      ...tweetFragment
    __typename
    }
    replies(from: $from, limit: $limit) {
      is_thread
      ...tweetFragment
      parent {
        user {
          handle
          __typename
        }
        __typename
      }
      reposting {
        ...tweetFragment
        __typename
      }
      replies {
        ...tweetFragment
        __typename
      }
      __typename
    }
    __typename
  }
}

fragment tweetFragment on Tweet {
  id
  reply_to_id
  is_reposted
  is_liked
  is_reported
  replies_count
  reposts_count
  favorites_count
  reports_count
  created_at
  is_edited
  deleted_at
  block_reason
  __typename
}

fragment userFullFragment on User {
  id
  handle
  name
  bio
  location
  website
  is_followed
  follows_you
  is_verified
  is_twitter_legacy
  verified_note
  created_at
  followers_count
  followings_count
  twitter_handle
  block_reason
  __typename
}

After reading a bit about GraphQL I was able to decipher what all that means:

  • query fetchUserProfile($handle: String!, $from: Int, $limit: Int) – here we are running a query (the name fetchUserProfile is just for convenience and can be replaced with foobar or omitted altogether) and declaring which arguments (the variables on the Postman screenshot) we want to pass to the operation.
  • user(handle: $handle) – we want to return the user whose handle field is equal to the value passed via the $handle variable.
  • Then we declare all the fields or classes we want the API to return.
    • Some of these are “simple” fields like id.
    • Others are more complicated, like the ...userFullFragment fragment.
    • Even totally separate classes (which would require separate calls in a traditional REST API) can be fetched in the same GraphQL query, like the tweets portion in the example above.

The T2 Social API

It’s a bit hard to document the T2 API schema without access, but I’ll try to add information as I continue to discover it.

The endpoint for T2 Social API: https://t2.social/api/query
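Putting it together, a minimal call might look like the sketch below. The session-cookie authentication is an assumption based on what the browser sends, and the trimmed-down query only asks for a few of the fields shown above:

```python
import json
import urllib.request

ENDPOINT = "https://t2.social/api/query"

# A trimmed-down version of the query captured in the Dev Console.
QUERY = """
query fetchUser($handle: String!) {
  user(handle: $handle) {
    id
    handle
    bio
    followers_count
    followings_count
  }
}
"""

def build_payload(handle: str) -> dict:
    """Standard GraphQL-over-HTTP body: the query plus its variables."""
    return {"query": QUERY, "variables": {"handle": handle}}

def fetch_user(handle: str, session_cookie: str) -> dict:
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(handle)).encode(),
        headers={
            "Content-Type": "application/json",
            # Assumption: the browser's session cookie authenticates the call.
            "Cookie": session_cookie,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Everything the browser does goes through this one endpoint, which is the usual GraphQL pattern: one URL, many operations, distinguished only by the query text in the POST body.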
