Instant Endpoint Observability for Docker and Kubernetes

Any software team knows how important it is to quickly detect and resolve issues affecting customers.

To help, we at Akita have built the fastest time-to-value monitoring tool we could: no code changes or custom dashboards needed.

Here's how one of our users, JM Doerr from Threads, put it:

"I liked the simplicity of Akita. I came with the expectation that it was really easy to set up and use. You just turn things on, and then you don't have to worry about it. It's up to it."

We're excited to announce that our beta is now open to everyone and we'd love for you to try it out. With the beta, Akita can be set up in 30 minutes to see which API endpoints are in use, which endpoints are slow, and which endpoints are giving errors.

How Akita Simplified Monitoring

Among all the challenges with monitoring and observability, how did we come to focus on time to value? The short answer: our private beta brought us here.

Akita's Goals

Given the major evolutions in software architectures over the past decade, it's no surprise that monitoring needs have also evolved.

loading ="lazy" alt=""/> According to the 2019-2020 RapidAPI Developer Survey.

The growth of the API economy, combined with the rise of service-oriented architectures, means that most web applications are now collections of APIs. In these applications, developers are responsible not only for their own service, but for how their service interacts with other services. Each software application has become its own ecosystem, with its own emergent behaviors. Gone are the days when a developer could focus on a simple, monolithic application.

When I first launched Akita, the goal was simple: to make it easier for users to find and fix issues in production. To help teams pay off monitoring debt faster than they accumulate it, I had some hard constraints on what the first product should look like:

Since we were building Akita to combat the complexity of APIs running on heterogeneous technology stacks, the solution needed to be as language-agnostic as possible. And to scale easily in complex multi-service environments, it needed to require as little developer intervention as possible.

But I wanted user research to determine both the underlying technology we would use and the questions we would help developers answer.

Choosing the (e)BPF route

Because technology R&D takes time, the first thing we chose was a technology approach. After dozens of interviews with users, we concluded that there was not enough standardization between service meshes or open telemetry. Instead, we chose to start with network packet capture: as long as the software teams were sending API traffic over the network, we could monitor it.

By mid-2020, we had developed a way to passively listen for unencrypted API traffic using Berkeley Packet Filter (BPF), via GoPacket. BPF let us capture packets, reconstruct the API traffic they carried, and automatically generate API specifications from it. We wanted this API discovery capability to be the foundation of the rest of our product.
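To make the approach concrete, here is a minimal sketch (not Akita's actual implementation) of passive capture with a BPF filter via GoPacket. The interface name and port are placeholder assumptions, and it only prints payload sizes; turning payloads back into API specifications takes TCP reassembly and HTTP parsing on top of this.

```go
// Minimal sketch: passively capture unencrypted HTTP traffic with a BPF
// filter using gopacket's libpcap bindings. Interface and port are
// placeholders; this is illustrative, not Akita's implementation.
package main

import (
	"fmt"

	"github.com/google/gopacket"
	"github.com/google/gopacket/pcap"
)

func main() {
	// Open the interface for passive (promiscuous) capture.
	handle, err := pcap.OpenLive("eth0", 65535, true, pcap.BlockForever)
	if err != nil {
		panic(err)
	}
	defer handle.Close()

	// BPF filter: keep only traffic on the service's HTTP port, so
	// unrelated packets are dropped in the kernel before reaching us.
	if err := handle.SetBPFFilter("tcp port 80"); err != nil {
		panic(err)
	}

	// Decode packets and peek at application-layer payloads. Real API
	// reconstruction would layer TCP reassembly and HTTP parsing on top.
	source := gopacket.NewPacketSource(handle, handle.LinkType())
	for packet := range source.Packets() {
		if app := packet.ApplicationLayer(); app != nil {
			fmt.Printf("captured %d bytes of application payload\n", len(app.Payload()))
		}
	}
}
```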

Iterating toward simplicity

From mid-2020 to mid-2022, we iterated on API spec generation to figure out what was most valuable. As soon as we released v0 of our API spec generation tool, our users told us it was too much information. No, they didn't want to build dashboards out of that data: they wanted less information.

Initially, we thought the way to simplify the product was to let users know how their API behavior was changing. Not only were our users asking for it, they were also downloading our API specs and diffing them by hand. This led us to start iterating on prototypes of change-monitoring features that alerted users to drastic changes in their APIs.

At the same time, we had started making our traffic-monitoring algorithms real-time, since up-to-date information is essential for teams monitoring their APIs. It turned out that the real-time work, rather than the change analysis, led us to our product's "aha" moment.

Doubling down on time-to-value

This brought us to a turning point in our private beta: May 2022, when our automatic traffic-monitoring algorithms became near real-time.

Until then, we believed that change analysis was the minimum viable product and that real-time monitoring was only the first part of building t...

