GCP Archives - ISbyR (Infrequent Smarts by Reshetnikov) - https://isbyr.com/tag/gcp/

Splunk O11y Deployment - Wed, 15 Oct 2025 - https://isbyr.com/splunk-o11y-deployment/

I have a little project I'm playing with, MentionVault.com. It's a platform that lets you look up guests on various podcasts and see what was mentioned in each episode. So I was thinking: I can't be that shoeless cobbler, how come I have an application and don't have any Observability for it?! That's how I decided to try a Splunk O11y deployment for my app.

MentionVault’s Architecture

MentionVault High Level Architecture

The front end of my app (the website) is a Next.js app running on Vercel, the database is Supabase, the batch (Python) jobs that populate the database are GCP cloud functions, and in one of them I'm using Google Vertex AI (for extracting the mentions from the episode metadata)…. hey look, I'm starting to look like a proper enterprise, with stuff deployed all over the place!

Observability Overview

Splunk O11y terminology is somewhat confusing, so here is what we will be deploying for each component:

Application Component              | Splunk Component                                      | Method
GCP Run Functions executions       | Splunk Infrastructure Monitoring                      | GCP Infrastructure
Digital Experience                 | Splunk O11y Real User Monitoring (RUM)                | splunk/otel-web node package
NextJS Frontend                    | Splunk O11y Application Performance Monitoring (APM)  | splunk/otel node package, vercel/otel node package
GCP Run Functions instrumentation  | Splunk O11y Application Performance Monitoring (APM)  | splunk/otel python package
GCP Scheduler                      | TBC                                                   |

I tried to stick to the default Splunk O11y Open Telemetry (OTEL) packages, but as you will see, that didn’t always work (for my use case).

First things first, get your hands on a 14-day Splunk O11y trial at https://www.splunk.com/en_us/download/o11y-cloud-free-trial.html

Once you log in… it's a blank canvas (see note below), so let's start painting.

Note: Don’t be alarmed if at the start (before you bring in any data) the UI looks very bare and you think to yourself, “where is all the shiny stuff?”. It’s intentional; the approach the Splunk O11y team took is: “We will start showing you widgets once we have the data to power them!”.

GCP Infrastructure

In a nutshell, Splunk O11y will pull all the metrics from the GCP Monitoring API. To configure it, start the wizard from the UI by navigating to Data Management > Available Integrations > (search for “gcp”) > Google Cloud Platform.

Splunk O11y Infrastructure Monitoring Available Integrations

By following the instructions in the wizard, you will provide information like the authentication method, the GCP project ID, and which data you want to collect, and in exchange, the wizard will tell you which commands you need to run in the GCP console shell or on your laptop (if you have gcloud CLI installed).

Remember that I told you that Splunk O11y will pull ALL the metrics from the GCP Monitoring API?! It definitely will! If, in the wizard, you are too lazy to pick and choose specific services and just ask for “the lot”, you might end up pulling, and PAYING, too much.

As you can see above, I did ask for “the lot”, and in a couple of late hours on the first day, Splunk O11y made about 3 times the number of metric calls compared to what it does now on a daily basis.

Anyway, after completing the wizard and manually triggering GCP Run functions (I didn’t want to wait for their next scheduled runs), the dashboards came to life.

As it is part of Splunk Infrastructure Monitoring, you will see all the “infrastructure” metrics, like the number of requests to these functions, CPU and memory utilisation, etc. You will not be able to peek “inside” the functions into the Python code to see where the time is being spent (that part we will do later, during the APM deployment phase).

Real User Monitoring (RUM)

After having my infrastructure covered by the Splunk O11y Infrastructure Monitoring, I jumped to configure RUM for my front-end.

The way Splunk O11y RUM (like most other vendors’ RUM) works is by injecting a piece of JavaScript code into the web page, so that when a page is loaded, this code collects a bunch of data (like what you clicked, how long the page took to load, etc.) and sends all that valuable information to the analytics platform (Splunk O11y in our case).

To configure RUM in Splunk O11y, you need to obtain a token from: Settings > Access Tokens > Create Token. Make sure to select “RUM token” in the wizard.

Splunk O11y Cloud Create new access token wizard

In the next step, if needed, you can adjust permissions (who can view the token value) and finally set the token expiration date (the default is 30 days, and the maximum is 18 years).

If the new token doesn’t appear on the Access Tokens page straight away, just refresh the page.

On this page, you can see all the tokens with their expiration dates (which conveniently highlights if a token is about to expire).

Splunk O11y Cloud Access Tokens page

After the token is created, you can start the RUM onboarding wizard by navigating to Data Management > Available Integrations > (search for “rum”) > Browser Instrumentation.

Splunk O11y Cloud Available Integrations - RUM

The wizard will ask you which RUM token to use, the name of your application, and the deployment environment. It will then provide the deployment steps based on your deployment/architecture (CDN / self-hosted / NPM). NPM was my choice.

Splunk O11y Cloud RUM Wizard

Note: You can also deploy the Session Replay functionality, but I’ve skipped it for the moment.

Running the suggested npm install @splunk/otel-web --save will install the required package(s), and will also update your package.json and package-lock.json.

package.json

As you can see, the suggested version of splunk-instrumentation.js had hardcoded values that are either sensitive and/or expected to change from one deployment environment to another:

import SplunkOtelWeb from '@splunk/otel-web';
SplunkOtelWeb.init({
   realm: "au0",
   rumAccessToken: "Super_Secret_Token",
   applicationName: "MentionVault",
   deploymentEnvironment: "DEV"
});

Codex (with my guidance) improved it by taking the hardcoded values out of the code and into environment variables, so now it looks like this:

import SplunkOtelWeb from '@splunk/otel-web';

const rumAccessToken = process.env.NEXT_PUBLIC_SPLUNK_RUM_ACCESS_TOKEN;
const deploymentEnvironment = process.env.NEXT_PUBLIC_DEPLOYMENT_ENVIRONMENT;

if (typeof window !== 'undefined') {
  if (!rumAccessToken) {
    console.warn('Splunk RUM access token is not set; skipping instrumentation.');
  }
  else {
    SplunkOtelWeb.init({
        realm: 'au0',
        rumAccessToken,
        applicationName: 'MentionVault',
        deploymentEnvironment,
    });
  }
}

To load it, a small component components/splunk-rum.tsx was created

'use client'

import '@/splunk-instrumentation'

export function SplunkRum() {
  return null
}

and it was then added at the top of the app/layout.tsx.

layout.tsx with Splunk O11y RUM component

After updating the local environment values, restarting the local Next.js server and browsing the (local) website, the Digital Experience dashboards came to life

Splunk O11y Cloud RUM Overview page

You can even see here some JavaScript errors that were happening while I was trying to convert hard-coded values into the env vars.

The sessions are also captured, including the waterfall of what was loaded and clicked on each page.

Splunk O11y Cloud RUM Session Timeline

That’s cool, but wait! How do I deploy Splunk O11y RUM to my NextJS, Vercel-hosted environment(s)? Turns out, pretty easy!

Assuming you already have Vercel configured to build your site from the GitHub repo (and why wouldn’t you?), all that needs to be done is to add the environment variables to Vercel, and then push your local code to one of the GitHub branches that is “monitored” by the Vercel pipelines.

Note: Make sure to specify different values for the NEXT_PUBLIC_DEPLOYMENT_ENVIRONMENT variable in each Vercel environment.

Vercel Add Environment Variable page

And just like that, the Tag Spotlight dashboard got a bit more colour, showing requests from my local environment as well as from the preview and production Vercel-hosted ones.

Splunk O11y Cloud RUM Tag Spotlight page

APM

While RUM provides insights into how real users experience your application, it doesn’t reveal how the (web) server spends its time serving each page request.

APM instrumentation augments either the execution of the code or the code itself.

The first approach is zero-code (A.K.A. automatic) instrumentation, where commonly used libraries (such as requests in Python) are replaced at runtime with instrumented versions. Although no code changes occur when your code calls these libraries, the instrumented versions collect and export telemetry data.

The second approach is code-based instrumentation, where developers use OpenTelemetry (in our case) or vendor-specific, language-specific libraries to instrument their code at key points to generate the required telemetry data.

My preference is to use the first approach, but let’s see how we go.

One more caveat: usually, APM-instrumented applications send their OTEL data to an OTEL collector (for filtering, enrichment, routing, etc.), which in turn forwards the data to the analytics platform (like Splunk O11y Cloud). But since I am relying on managed services for my application (Vercel and GCP Cloud Run), I didn’t have any infrastructure to host a collector, so I am sending the data directly to Splunk O11y Cloud APM.

Front-End Instrumentation

Create a new Access token following steps similar to the ones described in the RUM section, but make sure to select INGEST as the token type. Then kick off the APM onboarding wizard by navigating to Data Management > Available Integrations > (search for “apm”) > Node.js (OpenTelemetry) v3.x.

Splunk O11y Cloud Node.JS APM wizard

When entering the details in the wizard, instead of the default OTEL collector running locally (on the same host as the instrumented app), I needed to provide the Splunk O11y Cloud endpoint. The endpoint is https://ingest.<realm>.signalfx.com/v2/trace, where realm is the “location” of your Splunk O11y deployment that you can get from the URL in the browser.
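As a quick sketch, the endpoint can be assembled from the realm like this (the au0 realm value is just a placeholder here; use the one from your own Splunk O11y URL):

```shell
# Build the ingest endpoint from the realm; "au0" is a placeholder value
SPLUNK_REALM=au0
echo "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
# prints: https://ingest.au0.signalfx.com/v2/trace
```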

Side note: I guess signalfx is hard-coded somewhere very deep if Splunk can’t change the URLs to (or add new) Splunk-branded ones 6 years after the acquisition of SignalFx.

In the next step, the wizard will suggest a set of steps to complete to instrument your app.

Splunk O11y Cloud Node.JS APM wizard recommendation

And here the “Fun” begins…

The first 2 are easy; you simply install the package and add some environment variables for the Splunk OTEL to pick up its configuration.

The 3rd one, however, stumped me a little. Since I am not running a “pure” Node application but a Next.js one, I didn’t know what I needed to run (instead of node -r @splunk/otel/instrument <your-app.js>) to start the local Next.js server with Splunk OTEL instrumentation. After a bit of Googling/ChatGPT-ing, I landed on updating the dev script in my package.json (note the --require rather than -r, as well as the escaped quotes).

 ...
 "scripts": {
    "build": "next build",
    "dev": "NODE_OPTIONS=\"--require @splunk/otel/instrument\" next dev",
 ...

Restarted the server, browsed my site locally, and…. nothing happened :-(.

Following the suggestion in Splunk docs, I enabled OTEL debugging by adding an OTEL_LOG_LEVEL variable to my start script (actually, I created a new dev-debug one)

...
 "scripts": {
    "build": "next build",
    "dev": "NODE_OPTIONS=\"--require @splunk/otel/instrument\" next dev",
    "dev-debug": "OTEL_LOG_LEVEL=debug NODE_OPTIONS=\"--require @splunk/otel/instrument\" next dev",
...

And of course 🤦‍♂️, I realised that I forgot to add the SPLUNK_REALM and SPLUNK_ACCESS_TOKEN to the environment variables.

Note: I probably missed something else, but if I was using an .env.local file to store the OTEL-related environment variables, they were not picked up (while other ones, like Supabase configuration, were), so I needed to pass the values either via the start script in package.json or via the OS (export SPLUNK_REALM=...).
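For illustration, the “via the OS” workaround is just a pair of exports before starting the dev server; the variable names below are the ones mentioned above, while the values are obviously placeholders:

```shell
# Placeholder values; substitute your own realm and ingest token
export SPLUNK_REALM=au0
export SPLUNK_ACCESS_TOKEN="my-ingest-token"   # hypothetical token value
echo "Realm: $SPLUNK_REALM"
# prints: Realm: au0
```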

Restarted the local Next.js server, browsed, and… oh joy! The APM dashboard came to life, and I could see Traces, like the one below.

Since I already had Splunk O11y RUM configured, I could also drill down (or is it actually pan out?) to the RUM session that triggered this trace:

Now, after I validated that it is capturing traces, I decided to try and deploy it to Vercel, and here the REAL “Fun” begins…

I made sure to set all the necessary env vars in Vercel, but the deployment was failing. The deployment logs were showing this error:

23:13:45.367 node:internal/modules/cjs/loader:1215
23:13:45.368 throw err;
23:13:45.368 ^
23:13:45.368
23:13:45.368 Error: Cannot find module '@splunk/otel/instrument'
23:13:45.368 Require stack:
23:13:45.368 - internal/preload

But why? But how? @splunk/otel is declared in the package.json, so this module should be installed and available, shouldn’t it?

Turns out (according to ChatGPT):

“What’s happening: Vercel sets your NODE_OPTIONS for every Node process it spins up, including the ones it runs before npm install. At that point, node_modules doesn’t exist yet, so --require @splunk/otel/instrument throws MODULE_NOT_FOUND and the build aborts.

How to fix it: Don’t point NODE_OPTIONS directly at the package on Vercel. Instead …”

The “instead” part required a bit of trial and error, but I eventually landed on the need to create instrumentation.ts

export async function register() {
  if (process.env.NEXT_RUNTIME !== 'nodejs') {
    return;
  }

  try {
    const { start } = (eval('require') as NodeJS.Require)(
      '@splunk/otel',
    ) as typeof import('@splunk/otel');

    const logLevel =
      process.env.NEXT_PUBLIC_DEPLOYMENT_ENVIRONMENT === 'production'
        ? 'info'
        : 'debug';

    start({
      logLevel: logLevel,
    });
  } catch (error) {
    const err = error as NodeJS.ErrnoException;
    if (err?.code === 'MODULE_NOT_FOUND') {
      console.warn('Splunk OTel instrumentation not available yet, skipping preload.');
      return;
    }
    throw error;
  }
}

The deployment worked, but the instrumentation didn’t. Setting OTEL_LOG_LEVEL=debug in Vercel didn’t enrich the Vercel Run logs one bit.

Interestingly, somewhere along the way, my Traces from my local deployment also started showing calls to the local Supabase instance.

Without access to debug the deployment, I had to give up and rethink my approach: what is Vercel’s recommended way of using OTEL?

While Vercel has prebuilt integrations for some APM vendors, Splunk O11y Cloud is not one of them. But fear not! There is a way forward; we can use Custom OTEL Exporters.

So, install Vercel’s OTEL wrapper: npm i -E @vercel/otel@1.13.1.

Note: Make sure to pin the @vercel/otel package to the latest 1.x version, as v2 has some dependency conflicts with @splunk/otel-web.

And now create/update instrumentation.ts

import { registerOTel, OTLPHttpProtoTraceExporter } from '@vercel/otel';

export function register() {
  registerOTel({
    serviceName: 'MentionVault',
    traceExporter: new OTLPHttpProtoTraceExporter({
      // Splunk O11y OTLP traces endpoint
      url: `https://ingest.${process.env.SPLUNK_REALM}.signalfx.com/v2/trace/otlp`,
      headers: {
        'X-SF-Token': process.env.SPLUNK_ACCESS_TOKEN!, // ingest token
      },
    }),
    attributes: {
      'deployment.environment': process.env.NEXT_PUBLIC_DEPLOYMENT_ENVIRONMENT ?? 'local',
    },
  });
}

Note: We are using OTLPHttpProtoTraceExporter and not OTLPHttpJsonTraceExporter (as it appears in the example in the Vercel docs), since Splunk O11y Cloud expects the OTLP data in protobuf (not JSON) format.

After redeploying that to Vercel and browsing the hosted website, traces started streaming into the Splunk O11y deployment, with one caveat: the link between APM and RUM is gone ☹. I’ll need to spend some time to see if I can bring it back, but that is another item for the TODO list.

GCP Cloud Run (Python) Functions Instrumentation

Details to be updated soon….

At first glance, simply following the wizard works locally.

But the fun part will probably be making sure it works in the GCP deployment as well….

TO BE CONTINUED….

The post Splunk O11y Deployment appeared first on ISbyR.

Choosing a Cloud Provider for a Bootstrapped StartUp - Thu, 21 Jul 2022 - https://isbyr.com/choosing-a-cloud-provider-for-a-bootstrapped-startup/
There are many different options for funded start-ups to get free credits from various cloud providers, but choosing a cloud provider for a bootstrapped startup is a bit harder.

Some might already have a preference for one cloud provider over another (based on their experience or other factors), but here I’m trying to compare them from a pure “free cloud provider credits for a bootstrapped startup” perspective.

Summary

Yes, let me start from the end :-).

                 GCP                  AWS                      Azure
Programme Name   Google For Startups  Activate Founders        Microsoft for Startups Founders Hub
Cloud Credits    $2K                  $1K                      $1K / $5K / $25K / $120K
Period           2 years              1 year                   1 year
Support Credits  -                    $350 (for 1 year only)   -
Summary of Cloud offerings for Bootstrapped start-ups

Note: Prices are in USD

And now let’s dive into each one of the cloud providers and what they offer for bootstrapped startups

GCP – Google for Startups

Google For Startups Logo

It looks like GCP wasn’t offering any free credits to bootstrapped startups, but the good news is that from 2022 (not sure of the actual date) they are!

New in 2022: Calling all bootstrapped startups! We know that at the earliest stages, just getting started can feel like the biggest challenge. Self-funded startups can now receive up to $2,000 USD in Cloud credits to use over two years to help build and grow your company from the ground up on Google Cloud.

Requirements:

  • Founded within 10 years of applying to the program
  • Have a publicly available company website and a unique company email domain.
  • A valid Google Cloud Billing Account ID (e.g. 18-digit alphanumeric hex string like ABC123-DEF456-GHI789) linked to the domain and company email on your application

You are NOT eligible if:

  • Already enrolled in the Google For Startups Cloud Program or have received in excess of $4k Google Cloud credits
  • A company who has IPOd or been acquired
  • An educational institution, government entity, nonprofit, personal blog, dev shop, consultancy, or agency
  • A cryptocurrency mining company, or a company distributing tokens contrary to regulatory guidance in your jurisdiction. For example, companies issuing tokens solely for speculative purposes will not be considered

References:

AWS – Activate Founders

AWS Activate has 2 tiers; Activate Founders is the one for bootstrapped startups.

Requirements

  • New to AWS Activate Founders
  • Have not previously received credits from AWS Activate Portfolio
  • Have an active AWS Account
  • Startup must be self-funded, unbacked or bootstrapped – no institutional funding or affiliation with an Activate Provider
  • A company website or web profile
  • Startup must be less than 10 years old

References

Azure – Microsoft for Startups Founders Hub

There are a few tiers in Microsoft for Startups Founders Hub and each comes with its own free credits budget.

A few more benefits that Microsoft offers:

  • up to 20 seats for one year’s subscription to GitHub Enterprise
  • access to $1,000 of credits, three free months of an OpenAI API Innovation License and free consultation with an OpenAI expert
Stage    Cloud Credit Budget
Ideate   $1,000
Develop  $5,000
Grow     $25,000
Scale    $120,000

Interestingly, their FAQ describes 5 stages of a startup. As I read it, you can start applying from the Prototyping stage (for the Ideate tier) up to Established Market (for the Scale tier).

Microsoft for Startups Founders Hub is designed to grow with you. When you complete your application, please choose the stage that best describes your startup’s current state. As you continue to develop and expand your company in the future, you will be able to unlock more benefits and features.

Concept design
You are at the very beginning of your startup journey and are refining your idea and validating your solution by talking to potential users and industry experts. There’s a good chance your idea evolves as you speak to more people, which is completely expected.

Prototyping
You have already gone through some idea validation and are now beginning to build either a wireframe or a prototype to continue user testing. You still may not be certain about moving forward with your product at this stage, and that’s OK!

Building MVP
You know your solution has value and you are jumping into developing a minimally viable product (MVP). Your MVP should be more advanced than your prototype and have enough features planned to make it a functioning solution for potential customers.

MVP in market
You have already launched your MVP product and are focusing on shipping features and winning customers. You should choose this stage if you have developed your product beyond an MVP, but you are still working on acquiring paying customers.

Established market
You have a mature product in the market and have traction in the form of paying customers. If you choose this stage, you should feel you have achieved product market fit and are ready to focus on scaling your company.

Requirements

  • You must be engaged in development of a software-based product or service that will form a core piece of your current or intended business – this software must be owned, not licensed.
  • Your headquarters must reside in the countries covered by our Azure global infrastructure.
  • Your startup must be privately held.
  • Your startup must be a for-profit business.
  • Have a LinkedIn profile

You are NOT eligible if:

  • Your startup has already received more than a total of $10,000 in free Azure credits.
  • Your startup has gone through a Series D or later funding round.
  • Your startup is an educational institution, government entity, personal blog, dev shop, consultancy, agency, bitcoin or cryptomining company.

Resources

P.S.

There are more cloud providers than the GCP, AWS and Azure I’ve touched on here. At some stage, I might extend the comparison to other providers.

The next step would also probably be applying to each one of them with an idea and seeing if any would actually accept it.

We may earn a referral fee for some of the services we recommend on this post/website at no cost to you.

The post Choosing a Cloud Provider for a Bootstrapped StartUp appeared first on ISbyR.

How to set up Zoho Mail with your own domain - Sat, 19 Feb 2022 - https://isbyr.com/how-to-set-up-zoho-mail-with-your-own-domain/
Recently, Google announced that Workspace will become paid if you want to use its email function. So I was looking for some free alternatives and stumbled upon Zoho Mail. I will show you, step by step, how to set up Zoho Mail with your own domain.

Psst! Need a new domain? Try Namecheap

Side note: a while back I wrote about how to Use Gmail with your own domain for free. That method utilised Mailgun and a free @gmail.com address, and it is a bit more technical than some would want.

Zoho Mail is part of the Zoho Workplace suite, and while there are paid plans for the standalone email and for Workplace, there is also a “Forever Free Plan” for the email service only.

Here is a brief comparison of the plans; more details here.

                              Forever Free  Mail Lite        Mail Premium  Workplace Standard     Workplace Professional
Price                         Free          A$1.65 / A$2.20  A$6.05        A$4.40                 A$9.35
Email Hosting                 1 domain      Unlimited        Unlimited     Unlimited              Unlimited
Mail Storage Per User         5GB           5GB/10GB         30GB          30GB                   100GB
Attachment limit (Mail)       25MB          30MB             40MB          30MB                   40MB
Huge Attachment Limit (Mail)  -             up to 250 MB     up to 1 GB    up to 500 MB           up to 1 GB
Zoho Calendar                 Yes           Yes              Yes           Yes                    Yes
Zoho WorkDrive                No            No               No            Yes                    Yes
Zoho Office Suite (Writer, Sheet, Show)  No  No              No            Yes                    Yes
Zoho Click                    Yes           Yes              Yes           Yes                    Yes
Zoho Meeting                  No            No               No            Up to 10 participants  Up to 100 participants
Zoho Connect                  No            No               No            No                     Yes
Zoho Mail Plans Comparison – Prices in AUD per User per Month when billed annually

Creating Zoho account

Browse to Zoho Mail Pricing page and choose the applicable plan.

Zoho Mail Pricing

If you are interested in the free one, then scroll down and select the “Forever Free Plan”

Zoho Mail Forever Free Plan

Fill in your details on the following page to create a Zoho account.

Zoho signup form

After the signup process is complete you can continue with setting up Zoho Mail account

Setting up Zoho Mail

Navigate to the Zoho Mail Admin Console.

Create an Organization

Zoho New Organization form

Add a new Domain

Zoho create new Domain

Enter your domain name

Zoho New Domain form

Now to the “fun” part!

In order for Zoho to be able to receive and send email using your domain, you will need to set up DNS records with your hosting or DNS provider. I must say that this process is very well guided, and it links to detailed guides if you are with one of the major hosting/DNS providers. The screenshots in the next steps are from Google Cloud Platform (GCP), since I use it as my DNS provider.

Setting TXT DNS record to prove ownership of your domain

Zoho Mail Domain verification TXT DNS record

In GCP navigate to Networking > Network services > Cloud DNS and select your domain

Click ADD RECORD SET

GCP Cloud DNS

Create the TXT record using TXT data copied from Zoho Mail.

Leave TXT name blank (I was trying it with @ but it didn’t work for me)

Set TTL for 3600 seconds

GCP TXT Record

If everything was done correctly, when you click Verify TXT Record in the Zoho Mail wizard, you will be able to proceed.

Zoho Mail Domain verification successful

Set the remaining required DNS records

Next, set up the MX records.

Zoho Mail MX DNS record
GCP Cloud DNS MX records

Then add an SPF record and another TXT record for DKIM.
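For reference, here is roughly what the finished record set looks like in BIND zone-file form. Treat this as a sketch: the MX hostnames and SPF include value are the ones Zoho’s guides documented at the time of writing, example.com stands in for your domain, and the DKIM selector and public key come from your Zoho admin console; always copy the exact values the Zoho wizard shows you.

```
example.com.                        3600  IN  MX   10 mx.zoho.com.
example.com.                        3600  IN  MX   20 mx2.zoho.com.
example.com.                        3600  IN  MX   50 mx3.zoho.com.
example.com.                        3600  IN  TXT  "v=spf1 include:zoho.com ~all"
<selector>._domainkey.example.com.  3600  IN  TXT  "v=DKIM1; k=rsa; p=<public-key-from-zoho>"
```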

Now it’s time to configure users and start sending emails!!

Configure Users and Aliases

In Zoho Mail Admin Console navigate to Users and click CREATE MAIL ACCOUNT for your user.

Zoho Mail User creation

While you are there (or later), you can also set up aliases for your user under Mail Settings > Email Alias.

Zoho Mail Aliases creation

Sending and Receiving emails with your domain for free with Zoho Mail

Now you can send emails using any of the defined aliases

Zoho Mail send email using one of the aliases

We may earn a referral fee for some of the services we recommend on this post/website.

The post How to set up Zoho Mail with your own domain appeared first on ISbyR.

Completing the GCP Essentials quest on my daily commute - Fri, 09 Mar 2018 - https://isbyr.com/completing-gcp-essentials-quest-daily-commute/
Everyone attending the Google Cloud OnBoard in Sydney got an opportunity to earn the GCP Essentials badge by completing a QwikLabs quest for free. I don’t have much time during the day, or in the evening when at home, so I decided I’d complete the GCP Essentials quest on my daily commute to work.

The Challenge

The part of the daily commute where I can actually sit and work is about 30 minutes each way and goes through areas with quite a patchy network, so I had a ping -t 8.8.8.8 running constantly to understand whether a delay I was seeing came from Cloud Shell or the Console doing actual work, or from my connectivity to the network.

Patchy Network While doing the GCP Essentials labs

I kind of lied – I did not complete all of the labs on the bus/train.

I did the first lab, “Creating a Virtual Machine”, at home, as I didn’t know the format of the labs or whether the QwikLabs estimated durations are precise. I also did “Hello Node Kubernetes” at home, as the estimated time was 60 minutes and I am totally new to Kubernetes. The rest of them, as well as the associated posts, were actually done on the daily commute.

The Journey

The quest consists of 7 labs, or 8 if you want to do the first lab once for Linux and once for Windows.

Each lab you start creates a temporary GCP account that you log in with to the Cloud Console and Shell, where you will spend most of your time creating and using different GCP resources. QwikLabs suggests running the labs in Incognito mode, and after a few labs, that’s what my login screen looked like 🙂

The labs are very detailed and are easy to follow. I did have a few things I got stuck with. I am not sure whether I missed something in the instructions or the labs are not up-to-date.

I’ve decided to put my “workarounds” into separate posts, for my own sake as well as for the sake of others who might have a similar experience.

I also wanted to mention the folks at QwikLabs support. They are very responsive and nice. For some reason, when I tried to start the last lab (Set Up Network and HTTP Load Balancers), it stopped being free and asked me for 7 credits. I emailed them, and the support team sorted everything out pretty quickly.

The Bumps Along The Way

Creating a Virtual Machine – I got stuck on 10/15 points

Compute Engine: Qwik Start – Windows – a few tips about choosing the right image for the VM, as well as what to look for in the logs while you wait to be able to RDP to the machine

Creating a Persistent Disk – I decided to “extend” the lab a little by adding a file to the persistent disk, blowing away the machine, reattaching the disk to a new machine and verifying the file is still there

Hello Node Kubernetes – explains how to get the token required to access the Kubernetes Dashboard

The Finish Line

All the labs are complete and I can wear my “GCP Essentials” badge with honor!

All GCP Essentials labs complete

The post Completing the GCP Essentials quest on my daily commute appeared first on ISbyR.

]]>
https://isbyr.com/completing-gcp-essentials-quest-daily-commute/feed/ 0
Qwiklabs – GCP Essentials – Creating a Persistent Disk – Extending the lab https://isbyr.com/qwiklabs-gcp-essentials-creating-a-persistent-disk-extending-the-lab/ https://isbyr.com/qwiklabs-gcp-essentials-creating-a-persistent-disk-extending-the-lab/#respond Sun, 04 Mar 2018 19:24:19 +0000 http://isbyr.com/?p=253 I really enjoyed the Qwiklabs – GCP Essentials – Creating a Persistent Disk lab, but I think Qwiklabs could extend the lab a bit further by showing how the disk is persistent by blowing  away a VM, starting a new one and reattaching the disk. So I decided to try it. You can follow the steps … Continue reading Qwiklabs – GCP Essentials – Creating a Persistent Disk – Extending the lab

The post Qwiklabs – GCP Essentials – Creating a Persistent Disk – Extending the lab appeared first on ISbyR.

]]>
I really enjoyed the Qwiklabs – GCP Essentials – Creating a Persistent Disk lab, but I think Qwiklabs could extend it a bit further by showing that the disk really is persistent: blow away a VM, start a new one and reattach the disk. So I decided to try exactly that. You can follow the steps below to do just that.

Start here after you’ve finished the “Creating a Persistent Disk” lab.

Let’s create a new folder on our persistent disk (while still SSH-ed into the VM). Since the mounted disk is owned by root, use sudo: sudo mkdir /mnt/mydisk/tmp

Now add a file there with some text: sudo vi /mnt/mydisk/tmp/hello.txt

Press i to switch to insert mode and type in some text (it was “Hi There!” in my case), then press Esc followed by :wq to save and exit.
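If you prefer to skip the vi dance, the file can be written non-interactively. The sketch below uses a temp directory to stand in for the mount point so it runs anywhere; on the lab VM the path would be /mnt/mydisk/tmp (and you’d need sudo, since the mount is root-owned).

```shell
# Write the marker file without an interactive editor.
# A temp dir stands in for /mnt/mydisk here.
MOUNT=$(mktemp -d)
mkdir -p "$MOUNT/tmp"
echo 'Hi There!' > "$MOUNT/tmp/hello.txt"
cat "$MOUNT/tmp/hello.txt"
```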

It’s time to say goodbye to your VM and delete it.

In the Cloud Shell execute gcloud compute instances delete gcelab --zone=us-central1-c and you will see something similar to the below

The following instances will be deleted. Any attached disks configured
 to be auto-deleted will be deleted unless they are attached to any
other instances or the `--keep-disks` flag is given and specifies them
 for keeping. Deleting a disk is irreversible and any data on the disk
 will be lost.
 - [gcelab] in [us-central1-c]

Do you want to continue (Y/n)?  y


Deleted [https://www.googleapis.com/compute/v1/projects/qwiklabs-gcp-678cde46183b8e95/zones/us-central1-c/instances/gcelab].

You can also go to the Cloud Console and verify that your “old” VM is gone.

Create a new VM (using the Cloud Console or Shell) with the name gcelab2

When it is ready use the following command (in the Cloud Shell) to attach the disk:

gcloud compute instances attach-disk gcelab2 --disk mydisk --zone us-central1-c

SSH to the VM

Execute the commands below (similar to the ones you ran when attaching the persistent disk to the gcelab VM) to create a new mount point and mount the disk. DON’T run the format (mkfs) command, since formatting would wipe the data on the disk.

sudo mkdir /mnt/mydisk
sudo mount -o discard,defaults /dev/disk/by-id/scsi-0Google_PersistentDisk_persistent-disk-1 /mnt/mydisk

You can run df -kh to see the disk mounted and available

google245648_student@gcelab2:~$ df -kh
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           371M  7.5M  364M   2% /run
/dev/sda1       9.8G  983M  8.4G  11% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sdb        196G   61M  186G   1% /mnt/mydisk
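If you’d rather script this check than eyeball the df output, a tiny helper does it. This is just a sketch: it is shown against /, which exists everywhere, while on the lab VM you would pass /mnt/mydisk and expect it to print /mnt/mydisk back (rather than /, which would mean the disk isn’t mounted).

```shell
# Print the mount target a given path belongs to (GNU df).
check_mount() {
  df --output=target "$1" | tail -n 1
}
check_mount /    # on the lab VM: check_mount /mnt/mydisk
```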

Don’t forget to check your message from the old VM

google245648_student@gcelab2:~$ cat /mnt/mydisk/tmp/hello.txt 
Hi There!

 

The post Qwiklabs – GCP Essentials – Creating a Persistent Disk – Extending the lab appeared first on ISbyR.

]]>
https://isbyr.com/qwiklabs-gcp-essentials-creating-a-persistent-disk-extending-the-lab/feed/ 0
Qwiklabs – GCP Essentials – Hello Node Kubernetes – Getting the Kubernetes Dashboard Token https://isbyr.com/qwiklabs-gcp-essentials-hello-node-kubernetes-accessing-the-ui/ https://isbyr.com/qwiklabs-gcp-essentials-hello-node-kubernetes-accessing-the-ui/#respond Tue, 27 Feb 2018 11:49:34 +0000 http://isbyr.com/?p=248 The “Hello Node Kubernetes” lab went up until the point where I was supposed to browse to UI. I was required to provide the Kubernetes Dashboard Token The lab says to run the gcloud container clusters get-credentials command and then to start the proxy kubectl proxy –port 8081  after which you should be able to access … Continue reading Qwiklabs – GCP Essentials – Hello Node Kubernetes – Getting the Kubernetes Dashboard Token

The post Qwiklabs – GCP Essentials – Hello Node Kubernetes – Getting the Kubernetes Dashboard Token appeared first on ISbyR.

]]>
The “Hello Node Kubernetes” lab went smoothly up until the point where I was supposed to browse to the UI, where I was asked to provide a Kubernetes Dashboard token.

The lab says to run the gcloud container clusters get-credentials command and then start the proxy with kubectl proxy --port 8081, after which you should be able to access the Kubernetes Dashboard UI at https://<YOUR_SPECIFIC_URL>.appspot.com/ui.

However, instead of getting the actual UI I was presented with a “Login” screen.

Kubernetes Dashboard Login screen

I am not sure whether I did something wrong or not, but you can overcome this by providing the Token.

To get the token, open a new Cloud Shell tab and run cat /home/google244648_student/.kube/config | grep access-token (of course, replace the path with your user’s home directory)

You will see an output similar to the below

access-token: ya29.GqMBbwUy_ilk0VFXi4pK4-MAC5q-psLmOlt31lhmjc5MkBC4PocfZW7x_-yQVuUcxezLTVwomuLPfxcI-7OVXhyf2hbxLS4IXjHxxsTV_kToVrTpSQpytM6ipPzfwtI4mU0L9G266oBnRLL5FPwb81YiU5P3gq98ViK4HIf-IVbD68WyRvUtBJrUPHTPqyddh7AG5ZUBi7pfRqKdr8PTiGgzfXIJNg

Copy everything after “access-token: ” and paste it into the Kubernetes UI “Login” screen (don’t forget to select “Token” as the login method)
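To skip the manual copy-paste, you could pull just the token value out with awk. The sample line below stands in for the real kubeconfig content; on the lab VM you would pipe grep access-token ~/.kube/config into the same awk instead.

```shell
# Print only the token value from an access-token line.
# Sample input; replace the printf with:
#   grep access-token ~/.kube/config
printf 'access-token: ya29.SAMPLETOKEN\n' | awk '/access-token:/ {print $2}'
```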

The post Qwiklabs – GCP Essentials – Hello Node Kubernetes – Getting the Kubernetes Dashboard Token appeared first on ISbyR.

]]>
https://isbyr.com/qwiklabs-gcp-essentials-hello-node-kubernetes-accessing-the-ui/feed/ 0
Qwiklabs – GCP Essentials – Compute Engine Qwik Start – Windows Tips https://isbyr.com/qwiklabs-gcp-essentials-compute-engine-qwik-start-windows-tips/ https://isbyr.com/qwiklabs-gcp-essentials-compute-engine-qwik-start-windows-tips/#comments Tue, 27 Feb 2018 10:25:45 +0000 http://isbyr.com/?p=240 After finishing the Qwiklabs – GCP Essentials – Creating a Virtual Machine lab I was going over the  “Compute Engine Qwik Start – Windows” one and stumbled open a few misalignment between the lab and the actual environment. When you need to choose the OS the lab says: “Choose Windows Server 2012 R2,..”, however you will … Continue reading Qwiklabs – GCP Essentials – Compute Engine Qwik Start – Windows Tips

The post Qwiklabs – GCP Essentials – Compute Engine Qwik Start – Windows Tips appeared first on ISbyR.

]]>
After finishing the Qwiklabs – GCP Essentials – Creating a Virtual Machine lab I went through the “Compute Engine Qwik Start – Windows” one and stumbled upon a few misalignments between the lab and the actual environment.

When you need to choose the OS the lab says: “Choose Windows Server 2012 R2,..”, however no such image is available.

old boot disk config

What was actually available:

actual boot disk config

So I chose the first of them.

 

When you start the VM you are advised that it might take some time before the machine is RDP-able; you need to run gcloud compute instances get-serial-port-output instance-1 in the Cloud Shell and wait for the output below (which indicates that you can now RDP to it)

2018/02/27 05:27:05 GCEInstanceSetup: ------------------------------------------------------------
2018/02/27 05:27:05 GCEInstanceSetup: Instance setup finished. instance-1 is ready to use. Activation will continue in the backgr
ound.
2018/02/27 05:27:05 GCEInstanceSetup: ------------------------------------------------------------

I scanned the last few lines and couldn’t see it. I ran the command above again a few more times (with the additional parameter that shows only the delta from the previous output), but nothing changed. Then I decided to scroll all the way up, and the message actually appeared within the first few lines of the output.
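In hindsight, grepping for the marker would have saved the scrolling. The sketch below uses a sample line standing in for the real serial output; on the lab you would pipe the gcloud command itself into the same grep.

```shell
# Count occurrences of the ready marker in (sample) serial-port output.
# With a live instance, pipe in:
#   gcloud compute instances get-serial-port-output instance-1
printf '%s\n' '2018/02/27 05:27:05 GCEInstanceSetup: Instance setup finished. instance-1 is ready to use.' \
  | grep -c 'ready to use'
```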

The post Qwiklabs – GCP Essentials – Compute Engine Qwik Start – Windows Tips appeared first on ISbyR.

]]>
https://isbyr.com/qwiklabs-gcp-essentials-compute-engine-qwik-start-windows-tips/feed/ 1
Qwiklabs – GCP Essentials – Creating a Virtual Machine stuck on 10/15 points https://isbyr.com/qwiklabs-gcp-essentials-creating-virtual-machine/ https://isbyr.com/qwiklabs-gcp-essentials-creating-virtual-machine/#comments Mon, 26 Feb 2018 11:16:39 +0000 http://isbyr.com/?p=230 I was using Qwiklabs to learn a bit about Google Cloud Platform (GCP) and started the GCP Essential quest. During the 1st lab (Creating a Virtual Machine) I got stuck on 10/15 points, despite the fact that I thought I’ve completed all the steps as required. What I think went wrong is the step where … Continue reading Qwiklabs – GCP Essentials – Creating a Virtual Machine stuck on 10/15 points

The post Qwiklabs – GCP Essentials – Creating a Virtual Machine stuck on 10/15 points appeared first on ISbyR.

]]>
I was using Qwiklabs to learn a bit about Google Cloud Platform (GCP) and started the GCP Essentials quest.
During the 1st lab (Creating a Virtual Machine) I got stuck on 10/15 points, despite the fact that I thought I had completed all the steps as required.

What I think went wrong is the step where you need to create the 2nd VM.  The lab says that you can choose any zone for the VM

Being in Sydney, I naturally created it in australia-southeast1-c using the following command in Google Cloud Shell: gcloud compute instances create gcelab2 --zone australia-southeast1-c. But after following all the steps to the end of the lab, I noticed that my lab score was still only 10 out of 15.

I quickly deleted the improperly placed VM by running gcloud compute instances delete gcelab2 and re-created it in the suggested zone by running gcloud compute instances create gcelab2 --zone us-central1-c.
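A quick check of where the instance actually landed would have caught this right away. The sketch below pulls the zone column out of instance-list output; the printf is sample data mimicking gcloud compute instances list, which you would pipe in for real with a live project.

```shell
# Extract the zone (2nd column) from (sample) instance-list output.
# With a live project, pipe in: gcloud compute instances list
printf 'gcelab2  us-central1-c  RUNNING\n' | awk '{print $2}'
```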

Great Success – 15 out of 15 points achieved!!!

The post Qwiklabs – GCP Essentials – Creating a Virtual Machine stuck on 10/15 points appeared first on ISbyR.

]]>
https://isbyr.com/qwiklabs-gcp-essentials-creating-virtual-machine/feed/ 4