
Using Google Sheets and Runscope to Run API Tests with Multiple Variable Sets


We frequently run into users who are looking to run an API test multiple times, using a variety of initial variables. For example, the user might be looking to:

  • Test the same request against a list of user ids, where user_id is the only query string parameter that changes.
  • Test edge cases for an API call, such as passing a string to an endpoint that expects an int or passing special characters to it, and making sure that nothing breaks.
  • Share tests and results for multiple variable sets with other teammates for debugging.

We have supported this workflow for a long time using our Batch Trigger URLs. With a GET request to a Batch Trigger URL, you can pass in an array of objects to specify initial variables for the test, and kick off up to 50 test runs at one time, each using a different set of variables.
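For example, to run the same test against three different user ids, the batch data would be an array of objects, one per test run, with each object holding that run's initial variables. The variable name below is just a placeholder; the exact request format is shown in the documentation screenshot that follows:

[
  { "user_id": "1001" },
  { "user_id": "1002" },
  { "user_id": "1003" }
]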

Batch Triggers are pretty easy to use:

The Runscope documentation showing how to make a request to the Batch Trigger URL, and a sample JSON request data.

But what if you want an easier way to do this than hand-formatting an API request with a bunch of variables and building a JSON array with the different sets? And how do you easily match up your responses to the different sets of variables? What if we could make this easier?

Using Google Sheets and Runscope Batch Trigger URLs

I was interested in making this process easier for Runscope users, so I created a Google Sheet which leverages a Google Apps Script to take sets of variables and use them with a Runscope Batch Trigger URL to kick off requests:

The Google Sheets template, showing the main cells and filled out with sample information from Runscope API test runs.

With this Google Sheet template, you can specify sets of initial variable names and values under Variable Sets. Then, from your Runscope test, you need to get the Trigger ID. It can be found in your test editor, in the environment section under Trigger URL. You only need the Trigger ID, not the full URL:

The Runscope test editor interface, showing the environment settings menu expanded, with the Trigger URL menu selected on the left-hand side, and a box around the necessary Trigger ID for the Google Sheets.

Once you have your Trigger ID, you can replace the "your_test_trigger_id" cell with its value.

With the Trigger ID and variable names/values set up, the Number of Variables and Number of Sets fields should update automatically. You can then click “Run Tests”, and the Runscope tests will be kicked off with each of the variable sets. The first time you run the test, you will have to give Google permission to run the script.

Behind the scenes, the Google Apps Script takes the variable names and values and creates the array of objects to send to the Batch Trigger URL. The script makes the request, and then pastes the links to the results in the Results row.
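As a rough illustration of what the script does (this is a simplified sketch, not the actual template code; the sheet ranges, trigger URL format, request method, and response fields are all assumptions), the core logic in Google Apps Script looks something like this:

// Simplified sketch of the "Run Tests" step. The sheet ranges, trigger URL
// format, request method, and response fields are assumptions; the real
// template's script is linked in the gist mentioned below.
function runTests() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var triggerId = sheet.getRange('B2').getValue();   // hypothetical cell holding the Trigger ID
  var values = sheet.getRange('A5:C8').getValues();  // hypothetical range of variable names and values

  // The first row holds variable names; each following row is one variable set.
  var names = values[0];
  var variableSets = values.slice(1).map(function (row) {
    var set = {};
    names.forEach(function (name, i) { set[name] = row[i]; });
    return set;
  });

  // Send the array of variable sets to the Batch Trigger URL.
  // NOTE: see the Batch Trigger documentation screenshot earlier for the exact format.
  var response = UrlFetchApp.fetch('https://api.runscope.com/radar/' + triggerId + '/trigger', {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(variableSets)
  });

  // Paste a link to each test run's results into the Results row.
  var runs = JSON.parse(response.getContentText()).data.runs || [];
  runs.forEach(function (run, i) {
    sheet.getRange(10, i + 2).setValue(run.url);      // hypothetical Results row and response field
  });
}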

If you’d like to see if the tests actually passed, you can add a Runscope API Token, and then click “Get Results”. A second script will take each of the Result API Locations, request the results via the Runscope API, and update the Test Pass/Fail values accordingly.

Note that each time you click "Run Tests" it will clear out the results from prior tests. If you want to store the results, you can duplicate the current sheet, or copy and paste the values to another tool.

Get Your Own Google Sheet to Trigger Tests

If you’d like to take a look at the script that powers the template, you can see it under Tools -> Script editor, or here in this gist.

You can find the Google Sheets template here. You will need to make a copy into your own account, and again, give it permission to run the first time you use it. Please reach out to our awesome support team if you need any help!

Happy testing! 


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.



OpenAPI / Swagger Resource List for API Developers

  The Open API Initiative (also known as OAI) logo  

What is the OpenAPI / Swagger Specification?

The OpenAPI Specification, formerly known as the Swagger Specification, is a simple yet powerful way of describing RESTful APIs in a machine- and human-readable format, using JSON or YAML. It has a large ecosystem of tools that can help you design, build, document, test, and visualize your APIs.

Here's what a “Hello, World” example looks like:

A Swagger YAML example, showing one endpoint /hello that accepts a name parameter
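A minimal spec along those lines, written against version 2.0 of the specification, looks roughly like this:

swagger: "2.0"
info:
  title: Hello World API
  version: "1.0.0"
paths:
  /hello:
    get:
      summary: Returns a greeting for the given name
      parameters:
        - name: name
          in: query
          required: true
          type: string
      responses:
        "200":
          description: A greeting message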

The OpenAPI spec seems to be the most popular option among the three major formats the community is currently using for describing APIs (RAML and API Blueprint being the two others). It is used by companies of all sizes, including Microsoft, Netflix, IBM, and Lyft, to name a few.

The three major benefits of creating and using an OpenAPI specification are:

  • Generating interactive documentation - API documentation is often overlooked, but it's crucial for developers to understand how to interact with your endpoints, whether they're internal or external.
  • Human readable and machine readable - The choice of JSON and YAML as the accepted formats is not by accident. Being language-agnostic is an important aspect of the OpenAPI specification's widespread use, allowing teams to easily read and share specs, while making it easy to create tooling around them for any programming language.
  • Creating SDKs for multiple languages - A big challenge that API companies face is providing client libraries for multiple languages and frameworks: Node.js, C#, Python, Ruby, Java, etc., the list could go on forever. OpenAPI tooling such as swagger-codegen can help you do that with little work.

Another benefit of the OpenAPI specification is that it does not require you to rewrite your existing API. You can even find unofficial specs for public APIs. However, OpenAPI is not a one-size-fits-all solution. Not all services can be described by the OpenAPI specification, and it was not intended to cover every possible case of RESTful APIs.

 

Why the two different names?

Development of the Swagger specification started at Wordnik in 2010, and the project was later acquired by SmartBear in 2015. In that same year, SmartBear created the OpenAPI Initiative (OAI) under the Linux Foundation and donated the Swagger specification to it, in order to advance a common standard across industries. A number of tech companies, including Google, IBM, and Microsoft, signed on as founding members of the OpenAPI Initiative, and the Swagger Specification was rebranded as the OpenAPI Specification.

 

Why are we writing this?

Crafting good APIs is no easy task. OpenAPI brings a lot of benefits to this process, whether you approach it from a design-first or code-first perspective. The ecosystem of tools can help you generate interactive documentation, SDKs for your APIs in multiple languages, and server stubs.

But getting started with OpenAPI can be challenging. Figuring out how to start writing a spec from the ground up or for existing APIs, which tools, cloud providers, and frameworks support the specification, the differences between 1.2, 2.0, and the upcoming 3.0 versions, and finding open-source generators for your SDKs are only a few of the questions you can run into.

That's why we decided to aggregate the best resources in a single place, and hopefully, guide you on this journey in implementing and making the most of your OpenAPI specification.

 

What is included in here?

We broke down this guide into the following topics:

  • Writing Spec / Design
  • Documentation
  • Generators
  • Servers
  • Clients
  • Testing & Monitoring
  • Gateways / Management
  • Public Specifications

We really hope you find this useful, and please let us know if you think there are any additional resources we can add here.


Writing Spec / Design

Whether you already have an API or are creating one from scratch, writing your OpenAPI specification is the first step in getting the most out of its ecosystem of tools. 

Information and Tutorials

Migration Guides

Upcoming 3.0 Specification

Tools


Documentation

One of the most popular benefits of creating an OpenAPI specification is using it to create interactive documentation, allowing developers to quickly test your API directly from your docs. There are several tools to choose from here, ranging from the open-source Swagger UI to ReadMe.io's more extensive platform.

Information and Tutorials

Public API Examples

Tools


Generators

With existing API projects, you can leverage generators to automatically create an OpenAPI specification and documentation for your project. Several of these projects also support code annotations so you can incorporate those in your docs, and you can find generators for the most popular languages and frameworks: Node.js, Python, C#, Go, etc.

The swagger-codegen project is the official project from the Swagger GitHub organization, and it's used by several companies. It includes options to generate client libraries, server stubs, and docs. We have included it here, and also in the Servers and Clients sections.
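For instance, generating a Python client library from an existing spec is a single command with the swagger-codegen CLI (the file names and output path here are placeholders):

java -jar swagger-codegen-cli.jar generate \
  -i api-spec.yaml \
  -l python \
  -o ./generated-client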

Information and Tutorials

GitHub


Servers

The following tools can help you design and build OpenAPI-compliant APIs, while using your specification to handle parts of the plumbing, such as schema validation, input validation, and routing logic.

Information and Tutorials

Tools

GitHub


Clients

Providing client libraries for multiple languages can be a time-consuming commitment for API providers. Luckily, these tools can help you generate libraries automatically from an OpenAPI specification.

This may not be a panacea for all your SDK woes, but it can give you a good starting point on which you can improve with the help of your API developers and users.

Information and Tutorials

Tools

GitHub


Testing & Monitoring

Some of the open-source projects in the previous sections will help you run unit tests against your API just with your OpenAPI specification. 

These projects and tools can help you get a more comprehensive look into your OpenAPI-backed APIs, by running tests against a backend implementation of the API, and providing more information such as performance, response validation, and uptime.

Information and Tutorials

GitHub

Tools


Gateways / Management

API gateways can help you create and publish APIs, and implement features such as rate limiting, authentication, and policies and tiers, just to name a few.

Some of these tools support importing an OpenAPI definition to give you a head-start when deploying a new API, and/or exporting a specification so you can use it for docs or client generation.

Information and Tutorials

Tools


Public Specifications

Looking at how other companies have written their OpenAPI specifications can be helpful when writing your own. Here's a list of a few specifications available on GitHub:

You can also find a list of official and unofficial API specifications on APIs.guru OpenAPI directory. Check the value of the `x-unofficialSpec` parameter to differentiate between them:

And "The API Stack" project by the API Evangelist is another great resource as well:


Special thanks to some awesome folks:

We hope this resource list helps you make the most out of your OpenAPI / Swagger specification. If you have any feedback or links to other projects, tools, and tutorials we could include here, please reach out to us!


Do you have an OpenAPI definition for an API you'd like to start testing and monitoring? Learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.


Ensuring Microservices Success: Key Things to Consider


This is the first post in our Featured Guest Series! Janet Wagner shares eight tips on how your company can ensure success when moving to a microservices architecture.

If you're interested in being a part of our next series, fill out this short form and we'll get in touch with you for our next run.

Microservices are all the rage these days, and you can find blog posts about the topic on most technology news sites and blogs. Major technology companies like Amazon and Netflix have been evangelizing microservices for quite some time now. The number of businesses building microservices architectures to power web and mobile applications is growing at a rapid pace, and with good reason. There are significant benefits for companies running applications on microservices architectures, such as high availability, high scalability of both the architecture itself and the engineering teams developing it, and the ability to innovate and add new features faster.

If you are one of the many companies that have decided to venture into the world of microservices, there are a lot of things to consider. Do you start with a monolith or microservices architecture? How do you define the scope of each microservice?

Here are eight key aspects that you should consider to ensure you're prepared for building, deploying, and maintaining a microservices architecture.

1. Monolith or Microservices First

There is some debate as to whether it’s best to start with a monolithic architecture and move to microservices later, or to start with a microservices architecture from the outset. Martin Fowler, a software developer and microservices expert, makes a compelling case that it is best to start with a monolith and move to microservices later if warranted.

Stefan Tilkov, co-founder and principal consultant at innoQ, is convinced that starting out with a monolith is usually the wrong thing to do. He argues that breaking up a monolith into separate individual services later on is tough because its components will have become tightly coupled to each other by then.

Carefully consider the type of architecture that will work best for your applications. The best approach could be monolith first, microservices first, or another type of architecture altogether. If you're considering refactoring a monolith into microservices, there are many aspects to consider beyond the eight highlighted here. NGINX has a nice series of blog posts covering microservices, which includes a post about refactoring a monolith.

2. Infrastructure and Operations Investment

Major tech companies like Amazon and Netflix can afford to make significant investments in the infrastructure and operations needed to build, deploy, and maintain microservices. If your company is unable or unwilling to significantly invest in infrastructure and operations, then it is not a good idea to invest in microservices.

3. Peer Code Reviews

One of the challenging aspects of building a microservices architecture is that it often requires many autonomous engineering teams, with each team working on building and deploying one or more services to production. The chances of coding errors occurring are far lower for a company with only a few autonomous engineering teams and microservices than for a company with hundreds of microservices and dozens of engineers. Companies should consider conducting peer code reviews regularly, which can help reduce the number of coding errors and ensure the success of their microservices architecture.

4. Open Source Software or Build from Scratch

There are quite a few open source software projects that provide tools for building microservices architectures. Netflix provides a number of open source software projects that companies can use to build their own microservices architectures. Netflix isn't the only company doing that; other companies include Capgemini, Mashape, Pivotal, and Red Hat.

Companies can use these tools so that they don’t have to build microservices architectures from scratch; however, it may be necessary to build some components of a microservices architecture from scratch to meet specific application requirements.

5. Service Size

Some companies set a limit on how large each service should be, e.g. no more than 100 lines of code. The size of the service doesn’t matter; however, what does matter is that each service does one task, and does that task well. If a service ends up doing more than one task, then the company should consider breaking the service into separate microservices that perform one task each. Each service should also typically have only one datastore. Runscope co-founder and CEO John Sheehan discusses service size, datastores, and other aspects of microservices in his APIDays talk about "Scale-oriented Architecture with APIs".

6. Continuous Delivery

Continuous delivery is essential when it comes to ensuring microservices success. Continuous delivery is an approach to software development where teams release small sets of changes frequently. It requires a DevOps culture, and that much of the delivery process be automated. Releasing small sets of changes frequently reduces the chances that errors will occur and makes it easier to fix errors when they do. 

In an article about microservices tradeoffs on his website, Fowler explains:

“Being able to swiftly deploy small independent units is a great boon for development, but it puts additional strain on operations as half-a-dozen applications now turn into hundreds of little microservices. Many organizations will find the difficulty of handling such a swarm of rapidly changing tools to be prohibitive.”

Fowler goes on to say “This reinforces the important role of continuous delivery. While continuous delivery is a valuable skill for monoliths, one that's almost always worth the effort to get, it becomes essential for a serious microservices setup.”

7. Service Discovery

Service discovery is an important component of a microservices architecture. Microservice instances are constantly coming and going, making it difficult, if not impossible, for engineers to know where microservices are at any given time. Engineers must be able to find and manage the microservices running their applications. A service discovery system allows services to be detected automatically and helps them find each other. With a service discovery system in place, engineers don’t have to know where every service is located.

8. Monitoring and Visualization

Applications powered by a microservices architecture are heterogeneous and distributed by nature, making them difficult to monitor, visualize, and analyze. Traditional application performance management (APM) solutions are not designed to handle complex distributed applications.

However, today there are APM solutions designed specifically for monitoring, visualizing, and managing complex distributed applications. Engineers working on the development and deployment of microservices need modern APM solutions that will help them identify the root cause of service errors and failures, determine and reduce service dependency bottlenecks, and figure out why an application isn’t working the way it’s supposed to.

Choosing an effective tool to monitor, visualize, and analyze your microservices architecture is crucial. Continuous, real-time monitoring and analysis of your microservices is an essential component of microservices success.

Choose Your Application Architecture Carefully

These are just some of the things companies should consider before making the leap to microservices. If your company decides to make that leap, these key considerations could make the difference between microservices success and abject failure. Choose your application architecture carefully, as it’s not easy to move from one type of architecture to another.

You Might Not Need GraphQL

GraphQL logo

This is the second post in our Featured Guest Series! Phil Sturgeon takes a look at some GraphQL features and shares different ways to implement them for your endpoint-based APIs.

If you're interested in being a part of our next series, fill out this short form and we'll get in touch with you for our next run.

Do you like the look of GraphQL, but have an existing REST/RPC API that you don't want to ditch? GraphQL definitely has some cool features and benefits. Those are all bundled into one package, with a nice marketing site documenting how to do all the cool stuff, which makes GraphQL seem more attractive to many.

Obviously seeing as GraphQL was built by Facebook, makers of the RESTish Graph API, they're familiar with various endpoint-based API concepts. Many of those existing concepts were used as inspiration for GraphQL functionality. Other concepts were carbon copied straight into GraphQL.

Facebook has experimented with various different approaches to sharing all their data between apps; remember FQL? Executing SQL-like syntax over a GET endpoint was a bit odd.

GET /fql?q=SELECT%2Buid2%2BFROM%2Bfriend%2BWHERE%2Buid1%3Dme()&access_token=...

Facebook got a bit fed up with having an endpoint-based approach to get data and this other totally different thing, as they both require different code. As such, GraphQL was created as a middle ground between endpoint-based APIs and FQL, the latter being an approach most teams would never consider - or want.

That said, Facebook (and others like GitHub) switching from RESTish to GraphQL makes folks consider GraphQL as a replacement for REST. It is not. It is an alternative. 

Whilst the use-cases for the sort of API you'd build in GraphQL are quite different from those you'd build with REST (more on this) or RPC, that does not stop some folks jumping on the hot new thing. If you can withhold the urge to jump on the shiny new thing but are interested in some of the functionality GraphQL has to offer, you can brush up your endpoint-based APIs with these excellent existing concepts.

Note: This article uses the term REST as defined in Roy Fielding's dissertation, RESTish to mean it's somewhere on the Richardson Maturity Model but didn't make it to the top, and endpoint-based APIs to mean any REST/RESTish/RPC/etc. API that uses endpoints instead of POSTing to a single /graphql endpoint.

Sparse Fieldsets / Partials

GraphQL allows you to specify the fields you would like to be returned, allowing you to skip all data that is not relevant to your response. This makes the request a little bit faster to download over the network, as the tubes do not get quite so full.

Photo from GraphQL.org, showing specified fields in requests/responses

REST certainly does not talk about this out of the box, because REST does not concern itself with such implementation specifics. The practice is however very common in the endpoint-based API world.

A common standard for REST APIs to implement is JSON-API, which talks about sparse fieldsets. The basic idea is that you can specify the fields in the request:

GET /articles?fields[articles]=title,body

In the past YouTube had some really whacky partial syntax for this sort of thing:

GET /feeds/api/users/default/uploads?fields=entry(title,gd:comments,yt:statistics)

Facebook also has this in their Graph API:

GET /frankcarter?fields=id,name,picture

Read more on building your own implementation of this if you're interested, but I would stick to using existing solutions like ActiveModel::Serializer (Rails), tobscure/json-api (PHP), etc., which handle standard implementations for you.

Types / Schemas

Folks talking about GraphQL get really excited about the concept of Types. JSON can be a little vague when it comes to types. Requests require a lot of validation, and responses require clarification.

Some weakly typed languages might send a numeric string when they really meant to send an integer. A numeric JSON response field could look like an integer one minute, but then a wild decimal place appears.

GraphQL APIs are organized in terms of types and fields, not endpoints. Access the full capabilities of your data from a single endpoint. GraphQL uses types to ensure Apps only ask for what’s possible and provide clear and helpful errors. Source: graphql.org

Many endpoint-based API developers solve this with HTTP documentation using tools like MSON to describe their data, but another approach is to use JSON Schema.

JSON Schema is very cool and lets you describe your JSON using a JSON metadata file. This metadata file can be linked into your main JSON outputs, letting JSON Schema-aware clients discover the metadata over the wire. One of the many benefits of this is allowing client apps (like a web frontend) to validate their form data using the exact same rules as the server-side, without needing to go over the wire. 😲

Taken from JSON Schema examples:

{
    "title": "Person",
    "type": "object",
    "properties": {
        "firstName": {
            "type": "string"
        },
        "lastName": {
            "type": "string"
        },
        "age": {
            "description": "Age in years",
            "type": "integer",
            "minimum": 0
        }
    },
    "required": ["firstName", "lastName"]
}

Comically, all of these concepts (type systems, schemas, metadata in general, etc.) are exactly what most people hated about SOAP. Instead of just the payload, you had to mess about with a WSDL, and that led folks to enjoy REST. Well, in REST, using something like JSON Schema, you have the choice of using them, as do your clients.

Aren't interested in JSON? Look into serialization formats like Protocol Buffers or Cap'n Proto. They're an identical concept to the GraphQL type system, and have been implemented in a bajillion languages too.

Evolution / Versioning

Versioning is a really big, muddy, awful topic in the world of APIs, and most of us agree that every approach is a minefield.

Some folks will version in the URI, making /v1/foos and /v1/bars, then make /v2/foos and /v2/bars. If nothing changed between v1 and v2 for bars, anyone using the API doesn't know that, and has to go read some documentation to find out before they can upgrade. That leads to slow uptake, and now you have two endpoints for ages.

Some folks will start the same, with /v1/foos and /v1/bars, but then only create /v2/foos, leaving /v1/bars in place. That can be slightly better, but if (for example) the Api::v1::AppController and Api::v2::AppController have ever-so-slightly different error formats and your client is not aware of that, then a v1 or v2 error might pop up when the client is only coded to support v2 or v1. I've seen this cause a JavaScript error in production that broke the app.

That whole mess of nonsense can be avoided by not versioning your API. As daft as this might initially sound, if you can avoid changing a contract, that means less work for clients. The server can handle converting data from one format to another, or a new representation can eventually be created to replace the old representation. Replacing representations and fields carefully over time as things change is called evolution.

At a previous company offering crowdsourced carpooling, we switched from /matches to /riders, and internally those two representations shared a lot of code. "Matches" was deprecated, and clients started using the new "riders" concept. Over a few months, the internals changed to a much cleaner solution for riders. We eventually dropped the matches endpoint/serializers/logic entirely, without the riders contract changing. That helped us move fast on the code, but keep our contracts relevant for our clients until they didn't need them anymore.

That is not always possible in a rapid-application environment where things change drastically all the time. Startups love their "agile philosophies", pivots, etc., which can throw evolution out the window. As somebody who has worked for those companies, we would simply make a new API (copy, paste, tweak) and throw the old one out once a new iPhone app was launched and usage dropped below acceptable levels. There was no interest in "keeping the service alive for decades", which is a core tenet of REST, but that's ok, we were RESTish.

As pointed out in my GraphQL vs REST: Overview, easily being able to track field usage by clients is something that gives GraphQL an edge over most endpoint-based APIs. If a client wants to use field "foo" and you want to remove it, you know that their app will break.

Earlier in the article, we looked at sparse fieldsets, which can help here. If the endpoint-based API offers sparse fieldsets as an option, and clients use them, there will be trackable insight into which clients are using specific fields just like GraphQL. One downside here is that only clients requesting ?fields= will be detectable, unless the API goes a step further and requires the use of ?fields=!

I don't know if I would do it, but it's an option.

Query Language

GraphQL is primarily a query language, but if you'd like an endpoint-based API to have a query language, you can give it one.

OData has a strong slogan: "the best way to REST." That's a big claim, but OData provides a lot of stuff that people seem to like about GraphQL. Not only does it offer machine-readable metadata similar to JSON Schema and a powerful explorer similar to GraphiQL, but it provides a syntax and tooling to allow you to use a query language:

GET /Airports?$filter=contains(Location/Address, 'San Francisco')

Or even:

GET serviceRoot/People?$filter=Emails/any(s:endswith(s, 'contoso.com'))

OData has some other boring grown-up benefits, like allowing your endpoint-based API to be the source of truth for Salesforce, utilizing External Objects. Basically, instead of making janky sync logic between your applications and theirs, a custom object can be created to let the data live only in the one OData API, but still look like it's sitting in Salesforce. Guess what I'm working on at the moment.

Data Inclusion / Compound Documents

JSON-API suggests an ability to include related data from multiple resources in a single HTTP request. That is a big pro for some, as it reduces the number of HTTP requests, which under the right conditions can often speed things up for the client. It could also slow it down a bunch, but usually, it's a helper.

These includes are very similar to the nested field queries possible in GraphQL. When using JSON-API not only can you fetch related resource representations, but you can trim those related representations down using sparse fieldsets too:

GET /articles?include=author&fields[articles]=title,body&fields[people]=name

That will get you a list of articles with only the title and body fields, then the authors will be included with only their name.

Endpoint-based APIs offering compound documents handle it in a myriad of ways, but again the JSON-API approach is a common one. The JSON-API approach makes some clients sad because data is "side-loaded", which basically means included resources are jammed into a single array.

Excuse the large JSON blob, but it's important to understand the concept:

{
  "data": [{
    "type": "articles",
    "id": "1",
    "attributes": {
      "title": "JSON API paints my bikeshed!"
    },
    "links": {
      "self": "http://example.com/articles/1"
    },
    "relationships": {
      "author": {
        "links": {
          "self": "http://example.com/articles/1/relationships/author",
          "related": "http://example.com/articles/1/author"
        },
        "data": { "type": "people", "id": "9" }
      },
      "comments": {
        "links": {
          "self": "http://example.com/articles/1/relationships/comments",
          "related": "http://example.com/articles/1/comments"
        },
        "data": [
          { "type": "comments", "id": "5" },
          { "type": "comments", "id": "12" }
        ]
      }
    }
  }],
  "included": [{
    "type": "people",
    "id": "9",
    "attributes": {
      "first-name": "Dan",
      "last-name": "Gebhardt",
      "twitter": "dgeb"
    },
    "links": {
      "self": "http://example.com/people/9"
    }
  }, {
    "type": "comments",
    "id": "5",
    "attributes": {
      "body": "First!"
    },
    "relationships": {
      "author": {
        "data": { "type": "people", "id": "2" }
      }
    },
    "links": {
      "self": "http://example.com/comments/5"
    }
  }, {
    "type": "comments",
    "id": "12",
    "attributes": {
      "body": "I like XML better"
    },
    "relationships": {
      "author": {
        "data": { "type": "people", "id": "9" }
      }
    },
    "links": {
      "self": "http://example.com/comments/12"
    }
  }]
}

That article "1" has two relationship types, author and comment. These relationships might have one name, and the actual resource type could be another, so author is the relationship name but people is the data type. Cool.

So, if they have "data": { "type": "people", "id": "9" } and { "type": "comments", "id": "5" }, { "type": "comments", "id": "12" }, that means if ?include=author,comments is in the query string, it will expand those relationships, "including" the data in the "included" section of the JSON body.

These included items are all just shoved into a single array, with no hierarchy or concept of how they relate to the article. Anyone who calls this API will not simply be able to access the data like this:

response = client.get('/articles?include=author,comments')

article = response.body
comments = article.comments
author = article.author

Instead, there needs to be some logic that pulls out all the comments and all the authors, then stitches them back together in a loop. Writing this is awful, but of course loads of people have built generic abstraction layers in various languages to allow you not to have to.
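To make that concrete, here is a naive sketch of that stitching logic in plain JavaScript, written against the example document above (real clients should lean on an existing JSON-API library instead):

// Build a lookup of included resources keyed by "type:id", then resolve
// each article's relationships against that lookup.
function stitch(doc) {
  var included = {};
  (doc.included || []).forEach(function (resource) {
    included[resource.type + ':' + resource.id] = resource;
  });

  return doc.data.map(function (article) {
    var authorRef = article.relationships.author.data;
    var commentRefs = article.relationships.comments.data;
    return {
      title: article.attributes.title,
      author: included[authorRef.type + ':' + authorRef.id],
      comments: commentRefs.map(function (ref) {
        return included[ref.type + ':' + ref.id];
      })
    };
  });
}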

I used to really hate this, but I respect the fact that it reduces data going over the wire by de-duplicating the expanded forms of the same resource. E.g.: instead of returning the same author details multiple times, they're just there once.

Side-loading is a common convention for many endpoint-based APIs, but it has nothing to do with REST. That is simply how JSON-API happens to do things and is one common standard amongst a few. An endpoint-based API can easily nest data in a more relational way just like GraphQL. I built a tool to do this in PHP years back called Fractal (before going over to the JSON-API-side).

Summary

I don't think people need to spend a huge amount of time trying to make their endpoint-based APIs do all this stuff just to be cool, but I do think there are concepts in GraphQL that resonate with people who might not know they’re already available to them. As impressed as I am with GraphQL, it is not always the shiny magical hammer-for-everything some people think it is, and making informed decisions is important.

Knowing about these tools and concepts can help people push back against an overzealous switch to a whole new system, because the folks advocating it were unaware that a lot of the concepts used in GraphQL are not entirely new or unique to GraphQL itself.

That said, I do of course think having all of these concepts implemented and documented in one single “package” like GraphQL is super handy. GraphQL removes the arguing or confusion about what is “the most RESTful way to do something”, as it has a spec and example implementations.

A lot of APIs that are merely RESTish could certainly be GraphQL, but an actual REST API with HATEOAS would not make sense trying to jam itself into the GraphQL paradigm. State Transfer and Query Languages are different things, but as I've shown, you can blur the lines a little if you're interested.

Monolith to Microservices: Transforming a web-scale, real-world e-commerce platform using the Strangler Pattern


This is the third post in our Featured Guest Series! Kristen Womack shares the story of the transformation Best Buy went through when moving from a monolithic application to a microservices architecture, and the main secrets that made it successful.

If you're interested in being a part of our next series, fill out this short form and we'll get in touch with you for our next run.

Microservices are hot these days. According to Google Trends, searches for “microservices” were almost non-existent five years ago. And here we are today:

Maybe your team is one of the many now moving toward a microservices architecture. And with good reason: breaking a monolithic application into smaller, simpler services increases your project’s velocity, your ability to scale, and allows you to react more quickly to change.

In 2011, I was part of the Best Buy transformation that broke down the monolith of the website into separate web services. This is the story of that transition from monolith to microservices at one of the world’s biggest e-commerce platforms—what we learned, what worked, and how you can learn from our experience—while highlighting two keys in our transformation success: focusing on culture, and using Martin Fowler's Strangler Pattern.

But first: How did we get here?

A monolithic code base is common, and often a byproduct of rapid product success. But as the application grows, there is a breaking point where it’s difficult to keep up with the pace of market demand. A monolithic codebase becomes too cumbersome to be efficient in delivering new features and keeping ahead of the market. Integrations with other applications also become expensive. The need for companies to ease integration development with partners created a boom in public API growth from 2008 to 2013. Interestingly, the microservices trend seemed to follow shortly thereafter. 

ProgrammableWeb's API Growth graph, showing the rapid increase from 0 to almost 10,000 APIs between 2005 and 2013.

Transition and Transformation at a Major Retailer

In the late ’90s, Best Buy had a website that was developed by Best Buy Concepts, Inc. In 2004, Best Buy hired an outside company called Virtucom to revamp their website (as did several other retailers). The website was built using ATG technology and a SQL database. Pretty standard for the 2000s.

A decade later, the application ecosystem was overly complex, making simple changes and updates difficult. And due to the nature of outsourcing development, people with critical information and context about the systems walked out the door at the end of each project.

Three images from Best Buy's homepage, from the years 1996, 2006, and 2017.

In 2010, Best Buy kicked off an initiative to transform their e-commerce platform. The mission was to break the monolithic, tightly coupled application into microservices so that we could more quickly deploy new features and respond to market changes in the retail landscape. 

Our approach to transformation was based on three pillars: 

  • People
  • Process
  • Technology 

At first, that might seem counter-intuitive. Developers often conclude that technology is the answer to everything, but people and processes are prerequisites to using technology effectively, especially in software development. (Don’t believe me? Check out Conway’s Law.)

One of the first things we did was to create a dependency graph of the interconnected modules within the monolithic platform to get an understanding of what we were working with. The graph was so congested and vast that in order to see the full shape of the drawing, you had to zoom out so far you could no longer read any text.

So, we embarked on a journey to rewrite Best Buy dot com—one of the largest e-commerce sites in the world. (At the time, Best Buy was number 10 on the Internet Retailer Top 500.) No big deal, right?

When rewriting an important software system, you can do one of two things: 

  • Rebuild the app from scratch in parallel 

Or:

  • Write around the old app and start to cut over to new services 

You might think the first option is a no-brainer: “Just make a new one that does what the old one did.” I’ve witnessed teams try to do this; it rarely works. When it does work, it takes a year or more, the customer/user cutover is difficult and noticeable, and revenue is lost in the process.

The (better) alternative to a complete rebuild is to slowly and systematically take over the old application. As you create new, independent services, you kill off the old corresponding services. This realizes benefits more quickly while reducing risk. This was dubbed the “Strangler Pattern” by Martin Fowler. 

The process looks like this:

An image showing the step-by-step transformation of an old application (represented by a red square) into a new application (blue square). Both sit behind a load balancer: at first only the old application exists, then they share the load, and finally only the new application is left.
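One way to picture that routing step in code is a thin proxy in front of both applications, sending the paths that have been carved out to the new service and everything else to the legacy app. This is purely illustrative; the package, hosts, ports, and paths below are mine, not what Best Buy actually used:

// Illustrative Strangler Pattern routing with Node and http-proxy-middleware.
// Hosts, ports, and paths are placeholders, not Best Buy's actual setup.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Traffic for the strangled slice goes to the new service...
app.use('/catalog', createProxyMiddleware({ target: 'http://new-catalog-service:8080' }));

// ...and everything else still hits the legacy application.
app.use('/', createProxyMiddleware({ target: 'http://legacy-app:8000' }));

app.listen(80);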

When we started the Best Buy project, our goal was to create microservices from the monolith. 

The “before” state was annual deployments (yes, it was so painful that it became a once-a-year activity) that involved 60+ people, took more than 8 hours, and were always performed overnight. We had single points of failure in our application: spikes in traffic could bring the site down; contract teams spun up for projects and then left (with all the system knowledge) once the project was launched; the website didn’t support inventory complexities like buying open-box products and shipping from stores; bugs lurked in every corner.

Deployments and Scaling

We looked to continuous deployment exemplars like Etsy and architectural icons like Netflix, with the goal to deploy continuously, or—at a minimum—every two weeks. We were a long way from this, and the beginning was painful, but it was helpful to have a north star and others to look up to.

At the time, Best Buy was purchasing more and more infrastructure to support growing holiday sales each year. Holiday traffic can spike higher over the Black Friday weekend than the entire first quarter of the year, but the rest of the time that excess server capacity sat there, unused. And while we had more than one data center and a disaster recovery center, we didn’t have redundancy without heavy lifting. 

To mitigate waste and risk, we moved our web services to the cloud, where we could scale new servers on demand and build an automated failover in different geographic regions. Graceful failover was one of the main tenets of our transformation. We didn’t want production failures or traffic scale to impact the business. This was important for elastic server capacity and graceful degradation of a service.

My team was in charge of building the product catalog APIs. As we methodically built around the old SQL database system with the new application, a distributed NoSQL key-value data store (Riak), we started moving over production traffic to the new services.

Early on in our development, we experienced a major failure where we lost a node in the Riak ring. Where the legacy database was previously a single point of failure, Riak handled the failure gracefully without a single customer noticing anything amiss. We only discovered the issue with monitoring. This was an exciting milestone!

Realizing Benefits Early

Within a few months, the product catalog rewrite had enough functionality for our first customer: the Enterprise Returns application. When customers came into the store with a broken TV and a Geek Squad plan, they would get a gift card to purchase a new television of the same value.

Calculating that value was complex because of how fast television technology developed. The returns app called the catalog web service to search for comparable televisions by dimension, technology, and price. By delivering a valuable slice of the service, we immediately made an impact for the company and customers.

Within the next six months we added more functionality, selecting the highest-priority categories of the product catalog, to serve our second customer: the public APIs for external developers. Prior to the product catalog web service, the public APIs were only able to get a refresh of the catalog changes every 24 hours, which caused problematic data latency issues. After we had enough categories available in our service, they were able to switch over and achieve 5-15 minute data latency, with deltas keeping the public API data fresh and as close to real time as possible.

By having these two production clients early, we were able to learn a lot and improve our service and deployment pipeline before the entire website cut over from the tightly coupled SQL database to the product catalog web service built on the Riak NoSQL data store.

Eventually, the end state was leveraging the product catalog across the enterprise, but if we had not used the Strangler Pattern, it would have taken us years. And by the time we delivered the rewrite, so much of the market and organizational needs would have changed. Using the Strangler Pattern allowed us to absorb change and learn as it was happening.

Feedback Loops and Learning Cycles

We created a development process that governed how we iterated through our work for learning, not for deadlines. That’s not to say we didn’t have deadlines—we did, and we took them quite seriously—but our iteration goals were to learn how to continuously improve our work and delivery, reduce noise, and detect important signals as early as possible. 

Intentionally designed feedback loops were a big part of accelerating learning to reduce waste. This meant committing code early and often, and leaving no uncommitted code at the end of the night. This disciplined process, which included developer-written acceptance tests and test-first development, encouraged high quality. Running the test suite through Jenkins allowed for immediate feedback.

Pair programming was one of the things that made us most successful. Developers worked through thorny coding and design challenges more quickly than they could have alone, and caught mistakes that would have become pesky bugs later. 

We documented everything (through the wiki as well as through our test-suite) so others could pick up where we left off, and also rotated through work to discourage siloing and single-threaded responsibility.

Key takeaway: Creating microservices is more about team organization and approach than it is about technology.

Through standardization, we focused on moving the culture from “fire-fighting” and “heroism” to a methodical, purposeful, calm process. With this discipline came freedom from being reactionary and the ability to be proactive. We were able to more clearly see the whole picture rather than the hot problem of the day. 

We thought a lot about Conway’s Law and purposefully architected the organization into smaller teams by web service: one team for the product catalog, one team for commerce, one team for the product display UI, and so on. While maybe not true microservices, this was our first division of functional services. Each team was autonomous, with individual goals tied to the overall platform mission.

The first team to have the most robust testing suite was the product catalog team—Magellan. They were also the first to bypass deployment checks, because we had automated the human function of validating the service. No more 3 AM validation checks after the deployment! Beautiful: humans can sleep while computers work. 

Often in startups and in tech, development teams balk at process. But one of the biggest lessons I took from my time with the Platform Transformation team is that collaboration, communication, and discipline will give you far more freedom and speed than an ad-hoc, no-process, do-what-you-want attitude and culture.

Working hard on the culture allowed many of the teams to enjoy accelerated development and freedoms. Our focus was always on learning how to improve. 

We used our retros to get critical feedback about what was working and what wasn’t. We talked about what went well, what didn’t go well, and what we wanted to change in the next iteration. And we committed to regularly updating our team manifesto. If there was something there that we weren’t doing, we would ask ourselves “do we want to start doing it or remove it from the manifesto?” We valued refactoring our processes as much as we did our code.

But feedback loops are more than talking or retros—they’re your test suites, logs, traffic patterns, demos, velocity, customer feedback and more.

Conclusion

Transitioning to microservices is tough but worthwhile. You want to design your organization in a way that will manifest the ecosystem architecture you want. Success will depend more on people and process than technology—so be ready to listen and learn every day, emphasizing efficiency and discipline over dogma.

Focusing on building around the frays of the old system will reduce risk in an important system rewrite, whereas building in parallel will be high pressure, high risk, and often come with high opportunity costs. The Strangler Pattern is a proven approach to tackling legacy software.

There are now several books on microservices and production-ready software:

Building microservices is like future-proofing your platform, in that the services are inherently easy to strangle down the road. As Martin Fowler points out, “All we are doing is writing tomorrow's legacy software today.”

4 Methods to Make Your API Truly Discoverable


This is the fourth post in our Featured Guest Series! Bill Doerrfeld shares 4 key techniques that API advocates and product owners can use to increase their APIs' discoverability.

If you're interested in being a part of our next series, fill out this short form and we'll get in touch with you for our next run.

The software industry has shifted to truly embrace web APIs as products, rather than ancillary services alongside the traditional business model. Because of that, API providers are naturally placing greater emphasis on marketing these services and creating a new identity that caters well to third-party developers.

If you are an API advocate or product owner, you may feel the pressure to get your service into the hands of developers by spreading the good word at hackathons, webinars, or attending API-related events. Word of mouth is an excellent tool, but before you start printing business cards, there are other actions you can take to naturally increase the discoverability of your service. 

In this post, we’ll review some methods and tools that API providers can use to improve the visibility of a web API —  helpful for API owners in the process of releasing a new public web API or promoting an existing one. We’ll explore:

  • API portals from an SEO perspective,
  • Profiling an API within developer directories,
  • The viability of API discovery formats,
  • and more...

Implementing these API discoverability tactics takes little effort and little cost, and could propel your program to new heights, hopefully bringing in more happy developers and end users in the process.

1. Create an API definition

Open API Initiative (formerly known as Swagger) logo

First off, for anyone to discover your API, you will need transparent, public documentation. Using an API specification format like the OpenAPI Specification opens the door to embrace many helpful tools out there for generating beautiful documentation, including Swagger UI, ReDoc, or Slate, among others.

Having an API definition has other benefits too. It makes it easier to generate SDKs and code libraries in specific programming languages to help cater to the needs of your developer base. Widening the potential use cases makes the impact of the discovery funnel that much broader.

2. Leverage API discovery formats to make your API machine readable

There are literally thousands of public APIs out there. That is why leaders in the API space have sought to create standardized, machine-readable descriptions of API operations to help index them, and thus make them more searchable and discoverable.  

Automating the API discovery process first saw some momentum with the APIs.json project led by API Evangelist and 3Scale. The idea was to provide a meta description of API operations in a standardized way:

“APIs.json is a machine readable approach that API providers can use to describe their API operations, similar to how websites are described using sitemap.xml.”

If adopted, the APIs.json approach could be a simple boost to automatically index the thousands of APIs in existence. Though the project has some adoption, including Fitbit and Trade.gov, it has yet to become an industry-standard mechanism for making API operations more transparent.
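A minimal apis.json file, typically hosted at the root of your domain much like sitemap.xml, looks roughly like this (the field names follow the APIs.json format; the API details are placeholders):

{
  "name": "Example, Inc.",
  "description": "APIs offered by Example, Inc.",
  "url": "https://example.com/apis.json",
  "apis": [
    {
      "name": "Example Orders API",
      "description": "Create and track orders",
      "humanURL": "https://developer.example.com",
      "baseURL": "https://api.example.com/v1",
      "properties": [
        {
          "type": "Swagger",
          "url": "https://api.example.com/swagger.json"
        }
      ]
    }
  ]
}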

The APIs.guru directory website, showing a big yellow button "Add API".

An alluring solution to the lack of a widely adopted API discovery format is APIs.guru, which is slowly becoming a “Wikipedia” of APIs. The benefit of APIs.guru is that if API providers already have an API definition, they can automatically index it in the database. The APIs.guru database powers directories like Any-API, therefore spreading your API to many more potential communities.

3. Profile your API in the growing number of API directories

The ProgrammableWeb API Directory list, showing the Featured APIs list, Popular Categories, and Other Directories which includes SDKs, Sample Source Code, Libraries, Frameworks, and Mashups.

Finding the right API for a project can be difficult. That’s why sites have emerged to help developers sort through the chaos, and comparison shop for the perfect fit.

While the actual conversion rate has yet to be measured, there’s no doubt that adding your API to these locations can significantly increase its visibility. You can think of taking advantage of all these API databases like social media marketing; though your main traffic may come from Twitter, it doesn’t hurt to have a Facebook account either. ;)

  • ProgrammableWeb: The web’s most respected source for new API releases and API industry news. Their newsletter also lists new APIs added to the directory every week.
  • APIs.guru:  APIs.guru is establishing itself as an open source “Wikipedia” for web APIs. It takes your API definition, indexes it, and exposes it to other sites.
  • Mashape: A large API marketplace. You can index any API in the directory, even if you don’t run Mashape Kong management.
  • APIs.io: If you’re following the words of the API Evangelist and have created an APIs.json file, having a presence on APIs.io should be an easy process.
  • Hitch HQ: Developer users can follow APIs and be notified when they change. Hitch also works with API providers to grow their communities.
  • API Harmony: IBM’s curated API collection also allows third party contributions.
  • Exicon Directory: A medium sized API directory that aggregates many different industries.
  • API Katalogen: A Swedish API directory with many great open data resources. May need Google translate for this one!
  • SDKs.io: Profile your SDK here to increase its visibility. Powered by the developer experience platform APIMatic.
  • Rapid-API.com: A growing API database with a nice interface for discovering APIs. 
  • Any-API: Powered by APIs.guru, offering “documentation and test consoles for over 300 public APIs.”
  • API For That: Tweets new APIs added to the directory. 

Profiling your API in public directories can increase the visibility of your program and open it up to new communities. Some recommend generating a Swagger/OpenAPI, RAML, or API Blueprint definition, which can be auto-generated from many API management portals.

4. Improve SEO with target keyword copy in developer home pages & docs

Another important aspect of improving discoverability is the API home page, as the keywords, content, and URLs that we feed search engines are vitally important to boosting search relevancy.

Many API providers may not realize the high value of their copy on developer home pages. Words hold the true intelligence as they inform both prospective users and machine-driven search engines. When writing copy, use a range of terminologies relevant to your niche product, and cater to geographical trends in language.

Making the developer portal human-readable with a functional description catered to non-developers is helpful as well. For example, take how Avalara describes their API in a succinct single sentence:

“Real-time tax calculation for financial applications”
Avalara's AvaTax developer portal

Keyword optimization in our realm means researching what developers are discussing and searching for. Finding search keyword performance as well as geographic nuances can be done using a tool like the Google AdWords Keyword Tool. Knowing your keywords will also guide advertising on search engines, if you decide to go that route.

When describing your API, consider increasing word count on API homepages to describe technical features, and leverage the nuances in technical jargon to help distinguish your service. For example, “Emotion Recognition API” may be a more applicable label than “Facial Detection.” 

But this is only the beginning — the copy used is actually part of a larger onboarding experience that should be catered to unique developer personas. Guillaume Cabane recommends optimizing the signup process using predictive analytics, dynamic content, user testimonials, offering a self-identification option for customer support, and more to tailor the developer experience and increase lead conversion. 

Other Strategies to Boost API Findability

  • Use interactive consoles for testing API behaviors: Finding that an API exists is only the beginning. The next step toward developer acquisition is testing API behavior with sample requests and viewing the responses.
  • Create Zap & IFTTT apps: Partnering with Zapier and/or IFTTT could open your API to usage on a massive scale. Especially relevant if you have push features (webhook, Pub/Sub, real-time/streaming) that could be used to trigger other events to help savvy end users spice up their workflows. 
  • Attend events: Don’t dismiss the power that hackathons, meetups, and speaking at events have to spread awareness.
  • Improve learnability and developer experience: A big part of marketing an API is making the front-end as human-friendly and discoverable as possible. As your developer center acts as the locus for API learnability, make discovering new knowledge as easy as possible. 
  • Follow the industry: Channels like API Developer Weekly, ProgrammableWeb, the Nordic APIs blog, GET PUT POST, the Zapier engineering blog, and API Evangelist will keep you updated on industry momentum that could be critical for sustaining a relevant front face that caters to developer needs. 
  • Utilize dev communities for feedback: Post your API to Hacker News, Product Hunt, Beta List, or other channels to gain real-time usage and feedback. Heavybit also provides great talks and resources on developer evangelism.
  • Contribute: Publish thought leadership content around your platform. Kairos does this very well by creating a publishing machine whose content supports their human analytics program. 
  • Separate microservices into well-bounded product front ends when applicable: If you run a metaservice with multiple APIs for separate services or varying endpoints, it may help to segment the developer experience per service to cater to specific niche interests.

Discoverability is Part and Parcel of the Intention Economy

The first step toward promoting any API should be taking a step back to see if it is discoverable. Since time immemorial there has been a fine line between the product and its advertising, but as consumers become more informed that dichotomy is blurring, and the web sector is leading the charge.

Developers are smart, and probably don’t need overt advertisements. Rather, SaaS services should cater to their consumers’ *intentions*, increasing the discoverability of their service to match how developers actually search. Focusing on improving discoverability rather than dumping funds into an advertising campaign is in line with the Intention Economy, a philosophy coined by Doc Searls that places more emphasis on consumers’ own opinions and intelligence in finding products.

With karma-based ranking systems in vogue on developer social channels (Hacker News, Reddit, etc.), and search engine algorithms growing ever better at privileging quality, evergreen content, the Intention Economy is becoming more viable, and the hunt for APIs is an apt case study.

To review: in mid-2017, making an API truly discoverable involves keyword performance analysis, profiling the API in directories to increase its spread, leveraging API discovery formats, and creating machine-readable API definitions to document your service and take advantage of helpful tooling.

We have defined many ways to set your service up for discovery. Now it's up to you to tailor your experience to optimize visibility. Good luck!

Storing API Test Results to a Database with Eventn and Runscope


A common requirement for Runscope users is to save and analyze test results for alerting, building custom dashboards, and other analytical purposes. One way to do that is to pipe all your test results to an Amazon S3 bucket. But what if you want more flexibility in how you store the test results? Or if you want to filter and process the test result data before storing it? 

Eventn is an HTTP microservices platform specifically designed for data processing and analytical workloads. You can use it to launch microservices, and each one you create provides a Node.js JavaScript function runtime that executes when an HTTP request is made.

In this example, we will demonstrate how to call an Eventn microservice after each test completes and provide an example of saving the results to a database. We will also report on the data using SQL.

Eventn Overview

Each Eventn microservice provides two JavaScript functions: an HTTP POST function for data submission and an HTTP GET function for data reporting and analytics. These execute when triggered by their respective HTTP methods. The platform currently supports Node.js V7.6.0 which allows for all ES6 capabilities as well as newer features such as async/await and also supports a set of whitelisted npm modules.

It also provides database connectivity from each microservice, which allows users to easily save test results to a database of their choosing (Postgres, MySQL, and SQL Server are supported at the time of writing), or alternatively, to an Eventn-hosted MySQL instance (a free 250 MB allocation is provided).

With the full power of modern JavaScript in each microservice, it is simple to perform ETL and SQL reporting on stored test results.

Integrating Eventn Microservices and Runscope Webhooks

Connecting Runscope to an Eventn service is a breeze using a Runscope Webhook. We simply set up a webhook to call the Eventn POST function of a microservice on test completion. To start with we will look at saving the data to Eventn’s built-in MySQL store and then look at how to store the data to a remote database.

To get started, first we need to create a free account at https://eventn.com and create a new microservice by clicking on “Create Service” or the plus button:

Eventn's dashboard with the "Create Service" button highlighted.

Once created, take note of the service URL and authentication token from the “Manage” tab:

Eventn's dashboard, displaying the manage tab of a microservice and highlighting the URL and Token parameters.

Next, navigate to your Runscope account and select the “Connected Services” link from the icon in the top right-hand corner:

Showing the logged in Runscope interface, highlighting the menu dropdown on the top-right hand side and highlighting the "Connected Services" option.

And select “Webhooks” from the list of available integrations:

Note: The advanced webhook functionality within Runscope is not enabled by default, so contact Runscope support to get this enabled.

Runscope's Connected Services page, showing the list of available integrations, including Webhooks.

Next, we need to configure the webhook by entering the POST URL and authentication token from the Eventn service. To set the authentication token, an Authorization header must be set with the token prefixed with Bearer. For example: Bearer xyz. You can see an example in the following screenshot:

Runscope interface, showing the Advanced Webhooks configurations settings under the "Connected Services" menu.

Click “Save Changes”, navigate to one of your API tests, and edit the test environment settings. Select "Integrations" on the left-hand side menu and check the toggle to “on” next to our newly configured Webhook integration:

A Runscope API test environment settings, showing the Integrations menu selected on the left-hand side, and the Eventn microservice webhook toggled on.

The test is now configured to make a POST request to the Eventn microservice after each completion. By default, it's going to send all test result data.

Now, run the test and then navigate back to your Eventn account. Select the “Edit” tab, then the “GET” subtab to view the JavaScript function that executes when a GET request is made. By default, the Eventn GET function simply counts the number of records within a table. Click the “GET” button as shown below to run the function, and a count of 1 will be returned if the test data has been saved:

Showing an Eventn microservice dashboard, with the Edit tab selected, and the "GET" option selected. 

Note that the Eventn store() interface returns a Knex.js instance, which provides a powerful SQL query builder. For example, to update the onGet() function to show all stored records, we could replace:

return context.stores.default()
.table()
.count();

With:

return context.stores.default()
.table()
.select();

Or perhaps, we could just return the most recent record:

return context.stores.default()
.table().select()
.limit(1)
.orderBy('ts_created', 'desc');

Given we have the full power of SQL, it is trivial to create more complex reports that can be consumed by any downstream services, e.g. for visualization or alerting. Here is another example showing a breakdown of failed vs. passed tests:

function onGet(context) {
    // Daily pass/fail breakdown of stored test results; the `data` column
    // holds the JSON payload sent by the Runscope webhook.
    return context.stores.default()
        .raw(`SELECT ts_created as Date,
              DATE_FORMAT(ts_created,"%d %b %y") as Date_Human,
              sum(case when data->"$.result" = "pass" then 1 else 0 end) as Pass,
              sum(case when data->"$.result" = "fail" then 1 else 0 end) as Fail,
              COUNT(*) AS Total
              FROM SV_P3MNNL9LJ
              GROUP BY DAY(ts_created)`);
}

module.exports = onGet;

This would return a result such as:

[
  {
    "Date": "2017-04-01T09:45:08.000Z",
    "Date_Human": "01 Apr 17",
    "Pass": 4,
    "Fail": 2,
    "Total": 6
  },
  {
    "Date": "2017-04-02T01:45:20.000Z",
    "Date_Human": "02 Apr 17",
    "Pass": 203,
    "Fail": 0,
    "Total": 203
  }
....
]

And we could use another tool, such as Tableau or Excel, to create a visualization chart:

A bar chart comparing the number of tests that passed and failed for every day between March 30th and April 2nd.
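If you'd rather pull that report programmatically (for a custom dashboard or an alerting script, say), a minimal Python sketch could look like the following. The service URL and token are placeholders for the values from your microservice's "Manage" tab, and it assumes the GET function has been replaced with the pass/fail report above:

import requests

EVENTN_SERVICE_URL = "https://your-eventn-service-url"  # placeholder from the Manage tab
EVENTN_TOKEN = "your_eventn_token"                      # placeholder from the Manage tab

# Call the microservice's GET function and print the daily pass/fail breakdown
response = requests.get(
    EVENTN_SERVICE_URL,
    headers={"Authorization": "Bearer " + EVENTN_TOKEN},
)
response.raise_for_status()

for row in response.json():
    print("{Date_Human}: {Pass} passed, {Fail} failed, {Total} total".format(**row))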

External Databases

The previous example showed how to utilize Eventn’s built-in MySQL storage; however, it is also possible to save the results to an external database of your choice. This is done by creating a store function. Store functions contain the database connection details and can be called from within a microservice. See the Eventn User Guide for details on creating one.

Once the store function is created, simply call it in the same way as the default() function and pass in a table name. In the following example, we created a store function called “mydb” and we are accessing a table named “results”:

return context.stores.mydb("results").count();

Tip: If you wish to test with a locally hosted database, ngrok provides a great way to expose a local database via a public address.

Note: It is best practice to create a dedicated DB user with restricted permissions for this kind of usage, e.g. INSERT only. Restricting the access port is also advised.
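For example, on MySQL a write-only user for the "results" table might be set up along these lines (the user, password, and database names below are placeholders):

-- Dedicated user that can only INSERT into the results table
CREATE USER 'runscope_writer'@'%' IDENTIFIED BY 'use-a-strong-password';
GRANT INSERT ON your_database.results TO 'runscope_writer'@'%';
FLUSH PRIVILEGES;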

Conclusion

The above examples show how easy it is to save Runscope monitoring test results to a database. Eventn microservices make it easy to process the data and provide RESTful analytical services for other integrations.

If you need any help setting up your Runscope webhook integration, please reach out to us. If you need help with the Eventn platform, please reach out to Eventn's support.

Happy hacking!


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.


Building a Steam Powered IoT API with Thingsboard


This is the fifth and final post in our Featured Guest Series! Joshua Curry shares his experience building an IoT project with a steam engine, Thingsboard, Raspberry Pis, and Runscope.

If you're interested in being a part of our next series, fill out this short form and we'll get in touch with you for our next run.

The project began late at night, as all good steam projects should. I had been thinking about some of the projects I’ve built with microcomputers like the Raspberry Pi and wanted to vault them out of the bland novelty of blinking and beeping boxes.

I have a model steam engine I inherited from a tinkering uncle and thought it would be a gas to try and make an IoT device out of it. I also have experience in API production and consumption, so I wanted it to be a relevant POC for devs trying to integrate the physical world with the virtual.

Bridging 300 years of technology seemed like an appropriate way of taking some of the hot air out of the IoT hype. It was also a chance to really explore modern API testing tools such as Runscope.

A photo of a Dampfmaschine D10 steam engine

The goal of the project was to take a machine without a natural digital context and enable remote monitoring and control through a standardized API. One of the challenges I wanted to take on was how to differentiate a hardware failure versus an API failure and then pass that response on for debugging by a dev down the line.

Despite planning to spend a few hours testing sensors, I found myself immersed in the world of IoT API provisioning for the next week.

Setting Up The Project

I started with two Raspberry Pis and set them up with fresh OS installs and sensor libraries. From a practical standpoint, being able to monitor and manipulate multiple devices is crucial.

After trying some of the heavy hitters like AWS IoT and IBM Watson IoT, I settled on a much smaller open source IoT project called Thingsboard. The enterprise players have a raft of features that are thick with dependencies on other services they offer. By choosing the smaller package, I was able to provision a simple droplet on DigitalOcean.

A screenshot of the thingsboard.io website main page

The base IoT capability of the devices is to send MQTT messages to an external broker. Sending MQTT back and forth is fairly trivial, but getting them into a state that’s readable by an API requires a platform. Thingsboard digests MQTT and offers distinct REST API endpoints for each device, as well as an administrative API that offers historical data queries.

Running on the Raspberry Pi, the following Python snippet sets up a basic connection to the remote Thingsboard install I had started on my DigitalOcean droplet:

import json
import paho.mqtt.client as mqtt

# ACCESS_TOKEN and THINGSBOARD_HOST are placeholders for the device's access
# token and the address of the remote Thingsboard install
client = mqtt.Client()
client.username_pw_set(ACCESS_TOKEN)
client.connect(THINGSBOARD_HOST, 1883, 60)

[...]

sensor_data['temperature'] = round(int(temp))    # Boiler temperature
sensor_data['humidity'] = round(int(humidity))    # Steam detect
sensor_data['snd'] = snd                # RPM threshold
client.publish('v1/devices/me/telemetry', json.dumps(sensor_data), 1)

After installing Thingsboard and doing some basic configuration, the API was exposed. I created a free trial account on Runscope and was able to get some basic scheduled tests going within a few minutes. That’s when the API problems began, and I hadn’t even lit a fire yet.

API Testing and Debugging

The most common error was an authentication failure. So, I kept the tests running on my Runscope dashboard as I dug into the docs, trying out different API configurations. It turns out that Thingsboard needed an extra setup string before the access token.

One Runscope feature I discovered was the ability to run multiple API tests simultaneously, so I could experiment up and down the URI hierarchy at the same time. For example, I had one test that just requested the name of the device and another that actually requested the latest telemetry post. The first indicated a pass/fail of the device API, and the second included parameters that depended on the time period of the data. That was useful for testing parameter combinations for related but distinct resources.

As debugging progressed, the steady march of red error bars along my Runscope dashboard visualizer gave way to flashes of green. I was able to compare test results over time, with verbose request and response payloads from each time period. Success became steady, so I felt confident the API was behaving well enough for trials.

A screenshot of the Runscope dashboard, showing a list view of API test runs with green and red checkmarks

Running the Steam Engine

It was time to pull the Dampfmaschine D10 (by Wilesco) off the high shelf it had been parked on for months. The box smelled vaguely of kerosene and gear oil. I was glad to find that I still had some of the little fuel cubes they give you to light the boiler.

Another photo of the Dampfmaschine D10 steam engine, showing the two Raspberry PIs connected to it

I filled the boiler using the tiny funnel and pushed aside my Raspberry duo for space. Steam engines aren’t very complicated machines, so there wasn’t much to assemble. I lit one of the small fire cubes and slid it into the drawer underneath the boiler. Then, I waited. It takes a while to build up enough pressure.

I took the temperature module from one Raspberry Pi and placed it near the boiler and then connected a microphone module near the primary cylinder. The idea was to measure boiler temperature for device state and then use the microphone to measure the cylinder action threshold, i.e. whether it's running or not.

Soon, a weak chirp built to a solid whistle as the release valve signaled peak pressure. The wheel began to turn in a lurching loop and was quickly spinning at full RPM.

My terminal window began to ripple with streams of MQTT messages from both Pis as the sensors spit out data. The Thingsboard telemetry window dutifully relayed the data, and I switched over to my Runscope tests to see the results.

A screenshot of the thingsboard.io dashboard, showing the Latest Telemetry tab with values for the humidity, sound, and temperature sensors

There were plenty of errors. Disappointed but curious, I noticed the headers I had set up were different for some of the tests. Luckily, it was a noob mistake; I had used X-Authentication instead of X-Authorization as a header. After fixing it and restarting the schedule, I saw a succession of green check marks on my tests.

The API responses were meaty enough to begin creating Runscope assertions for various states. One for API up or down, another for boiler sensor data/no data/fail, another for the cylinder sound monitor timestamp, and the last one for telemetry update time.

That last one was the goal of the project. If the Raspberry Pis are on and functioning, they will send telemetry (MQTT) even if there is no new sensor data. If they have crashed or power is off, the telemetry will stop. I can set a query for when the last successful telemetry was sent and if it's not recent, trigger an alert with a failure message and a time when it failed.

Thus, Runscope can be set to email me if the API goes down, the steam engine has stopped, or the Raspberry Pis are not responding. All different cases and priorities.

Wrapping Up

Runscope was a really useful tool to have in this project. IoT has both software and hardware challenges. It took care of the software monitoring while I focused on the hardware, making sure it didn’t explode in my face.

IoT is about much more than light bulbs and fitness trackers. The manufacturing industry is in the process of updating and integrating vast amounts of legacy (and proprietary) monitoring applications. Delivering modern, standards-compliant APIs for machine monitoring and control is the task of armies of engineers right now.

This model steam engine is pretty low tech, but the project illustrates a viable approach to modernizing legacy machinery. It was also fun to do and made use of a diverse set of modern tech tools that aren’t always seen as related.


Monitoring API Performance with the New API Metrics Endpoint


We have a new endpoint available for the Runscope API! Now you can retrieve performance metrics for each individual API test, keep a pulse on your API's performance over time, and build custom internal or external dashboards with the data.

The endpoint returns the same information that you can find in the Overview tab of your API test, under Test Performance:

The Overview tab of a Runscope API test, showing the Test Performance section.

Using the API Metrics endpoint

You can use the new endpoint by making a request to:

https://api.runscope.com/buckets/<bucket_key>/tests/<test_id>/metrics

And you can filter the request by using 3 different parameters:

  • region - The service region you're using to run your tests (e.g. us1, us2, eu1, etc.)
  • timeframe - Hour, day, week, or month. Depending on the timeframe you use, the interval between the response times will be different.
  • environment_uuid - Filter by a specific environment, such as test, production, etc.

The endpoint will return the same information you can find in the dashboard. An example response will look like this:

{
    "response_times":[
        {
            "success_ratio":0.3333333333333333,
            "timestamp":1494964800,
            "avg_response_time_ms":44
        },
        {
            "success_ratio":0.5517241379310345,
            "timestamp":1494968400,
            "avg_response_time_ms":39
        },
        ...
        {
            "success_ratio":1.0,
            "timestamp":1495044000,
            "avg_response_time_ms":46
        },
        {
            "success_ratio":0.9965397923875432,
            "timestamp":1495047600,
            "avg_response_time_ms":45
        }
    ],
    "change_from_last_period":{
        "response_time_50th_percentile":0.05284147557328004,
        "response_time_99th_percentile":-0.03965577026132455,
        "total_test_runs":-0.0313588850174216,
        "response_time_95th_percentile":0.2567257314416911
    },
    "environment_uuid":"all",
    "region":"us1",
    "timeframe":"day",
    "this_time_period":{
        "response_time_50th_percentile":44.0,
        "response_time_99th_percentile":101.54999999999998,
        "total_test_runs":296,
        "response_time_95th_percentile":88.94999999999997
    }
}

Keep in mind that each element inside response_times can represent multiple test runs, depending on the time interval you use when calling the metrics API.

You can store that information in an external service if you want to keep track of performance for a period longer than 30 days, or if you want to build your own custom internal or external dashboard.
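For example, a small Python script (with placeholder bucket key, test ID, and personal access token) could fetch the last day of metrics and forward or store each data point:

import requests

RUNSCOPE_TOKEN = "your_personal_access_token"  # placeholder
BUCKET_KEY = "your_bucket_key"                 # placeholder
TEST_ID = "your_test_id"                       # placeholder

url = "https://api.runscope.com/buckets/{}/tests/{}/metrics".format(BUCKET_KEY, TEST_ID)

# Request the last day of metrics for this test
response = requests.get(
    url,
    headers={"Authorization": "Bearer " + RUNSCOPE_TOKEN},
    params={"timeframe": "day"},
)
response.raise_for_status()
metrics = response.json()

# Each element can represent multiple test runs, depending on the timeframe
for point in metrics["response_times"]:
    print(point["timestamp"], point["avg_response_time_ms"], point["success_ratio"])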

For more information about the endpoint, you can check out our API reference docs here. If you have any questions, please reach out to our awesome support team.


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.


3 Easy Steps to Cloud Operational Excellence


This is a guest post by Bruce Wang and Glen Semino from SYNQ, a video API built for developers. In this post, they explain the tools and processes they use to keep the company's API operations running smoothly, and share a real-world story of how they found an API bug before launching a new feature.

There are a lot of tools out there, and sometimes it's hard to sift through them all. Here’s a simple guide to combining three tools, Runscope, PagerDuty, and StatusPage, into a powerful cloud operational workflow that will give you peace of mind and clear visibility into your application for your customers and internal teams alike!

In case you’re not familiar with the tools, here’s a quick rundown:

  • Runscope — highly flexible API testing and monitoring service
  • PagerDuty — incident management system
  • StatusPage — Customer-facing API health status page

The workflow

It’s important to implement tools for specific purposes, and we wanted to integrate these 3 tools to help manage our operational process better. In the following examples, we’re going to show you how we added a new feature to our product (Live Streaming APIs), and added operational visibility to it. First let’s walk you through our workflow:

  • A Runscope test monitors the service on a schedule and sometimes tests from different geo locations depending on how it has been configured
  • If a Runscope test fails, PagerDuty creates an incident, alerts our Slack channel, and alerts the appropriate engineers
  • PagerDuty also updates the service status on StatusPage to alert our customers the service is having problems
  • Once the problem is resolved and the Runscope test that was failing starts to pass, the incident on PagerDuty will resolve itself and the service on StatusPage will revert back to operational status automatically

Step 1: Create a Runscope test for the Live Streaming API

Runscope provides an easy way to make POST requests on an API and then make assertions on the response.

Here is what our POST request to our live streaming service looks like in Runscope:

Detail view of a Runscope API request test step, showing a POST request to {{url}}/v1/video/stream, and two query parameters of api_key and video_id set to their respective variables

Note: the {{xxx}} notation is a variable that can be set from previous tests or configured via “environment”-specific settings. You may hard-code values in the beginning, but using variables is invaluable for creating richer tests across your various service environments.

When our live stream API is called, the JSON response we expect should include a playback and stream url, so we just need to add some simple assertions in Runscope:

Assertions tab view of a Runscope request step. It has three assertions, one for the status code equals to 200, and that the JSON body property of playback_url and stream_url are not empty

We check that the HTTP response is 200 and then we check that playback_url and stream_url are not empty. We also save the values that are in playback_url and stream_url:

Variables tab view of a Runscope request step, saving the JSON body contents of the properties stream_url and playback_url to variables for subsequent use

The reason for saving the values is that we will then call our video details API and assert that the values stream_url and playback_url are present:

Runscope request step detail view of a POST request to {{url}}/v1/video/details, with the query parameters of api_key and video_id

We then make the assertion on the details API that the playback_url and stream_url are the values we expect:

Assertions tab view of a Runscope request step showing three assertions: status code equals 200, and the JSON Body properties of stream_info.playback_url and stream_info.stream_url are equal to the saved values from the previous request of playback_url and stream_url

After we built this test, we put it on a schedule using the ‘Schedules’ menu in Runscope and we were ready to add a PagerDuty alert so that we could be notified if the test for the live streaming API fails.

Step 2: Setting up PagerDuty with Runscope

Luckily, Runscope and PagerDuty have a pre-built integration, so all we had to do was go to PagerDuty and create a new service under the ‘Configuration’ menu. When adding the service, for ‘Integration Type’ we specified ‘Runscope’:

PagerDuty's Services > Add Service screen showing the integration settings for Runscope

Then we configured the ‘Incident Settings’ and ‘Incident Behavior’, and clicked ‘Add Service’. Once the service was added, we were able to see it under ‘Services’ in PagerDuty:

PagerDuty's Services tab showing a Runscope integration row setup

To connect our live stream test in Runscope to PagerDuty, we went into Runscope under ‘Connected Services’ and clicked the button that said ‘Connect PagerDuty’:

Runscope's Integration page showing the PagerDuty integration highlighted

Runscope then asked us to authorize our PagerDuty account, so we put in our PagerDuty credentials and clicked ‘Authorize Integration’. Finally, we chose the PagerDuty service we wanted to integrate with Runscope and clicked ‘Finish Integration’:

Runscope's and PagerDuty integration page setup, showing the selected service from the PagerDuty's account

Once we did that, inside of ‘Connected Services’ in Runscope we could see our PagerDuty integration:

The Connected Services tab inside the user's Runscope account displaying a list of existing services, including Ghost Inspector and the PagerDuty integration setup in the previous steps

As you can see from the screenshot, our PagerDuty service called ‘SYNQ Live Stream Check’ is now integrated into Runscope. The last step was connecting the PagerDuty service to our Runscope test for the live streaming service. To do that, we went to the live stream Runscope test, opened the ‘Editor’, and modified the integrations for the environment we are using. Then we just flipped the integration to ‘ON’:

Runscope's Environment Settings for an API test of SYNQ's Live Stream feature, showing the Integrations tab with the newly created PagerDuty integration set to "on"

Note: Again, this notification is configured per environment; as you can see, this environment is “Production”.

We now had the live stream test from Runscope connected to PagerDuty, so we would get alerted by text message or phone call if the Runscope test failed. In addition, we connected PagerDuty to our Slack channel following this guide, so that if a PagerDuty incident is triggered by Runscope, we get alerted in our Slack channel. The last piece was connecting PagerDuty to StatusPage, so that our clients could be alerted if the live streaming service fails.

Step 3: Adding the Live Streaming Service to StatusPage

Now that we have a way to monitor and alert on our live streaming service, we need to expose this to our clients. We do this with our public-facing StatusPage (having a transparent operational status is very important, and you can read more about that here).

To connect PagerDuty and StatusPage, we followed this PagerDuty guide. Once we had both of the accounts connected, the rest of the setup occurred on StatusPage. Inside of our StatusPage configuration, we now had a section for PagerDuty. Inside that section, to connect a component to a PagerDuty service, we needed to add a rule:

StatusPages's Inactive Services tab, showing a list of connected PagerDuty options and the "Add Rules" link on the right-hand side

Under ‘SYNQ Live Stream Check’, we clicked ‘Add Rules’, which brought us to another page where we could connect the ‘Live Stream’ component on our StatusPage to the PagerDuty ‘SYNQ Live Stream Check’ service:

The "Add Rules" detail page of the PagerDuty integration, showing the settings connecting the Live Stream feature on PagerDuty to the setting that StatusPage should show in case of an incident, and also template rules to be set

We clicked on ‘Save Rules’ and we were done. On StatusPage under ‘PagerDuty Setup’ and ‘Active Services’ we could now see our ‘SYNQ Live Stream Check’ present:

StatusPage's Active Services tab showing the newly created PagerDuty integration on the list

Now our public facing StatusPage shows our ‘Live Stream’ status!

SYNQ's StatusPage public page displaying all systems as Operational

If our live stream service test fails on Runscope, the ‘Live Stream’ component on our status page goes from ‘Operational’ to ‘Degraded’.

Mayday! Mayday! We have a Problem

Although our live stream service was still in alpha, we had seen no issues, and our Runscope tests for the service were all green. Then one day, we got a text message from PagerDuty alerting us that our Runscope test for our live stream service was failing. In the meantime, we were also alerted on Slack, and our ‘Live Stream’ component on StatusPage went from ‘Operational’ to ‘Degraded’.

Next, we immediately went into our live stream Runscope test and noticed that we were not getting the appropriate HTTP response code from our live stream API. We knew at this point that our live stream service was having an actual failure. We then checked the server logs for our streaming servers in Amazon CloudWatch and noticed that they were not taking any requests for creating new streams. We eventually traced this to a backend service we depended on that had run out of resources.

We discovered two issues. First, we were not deleting old and unused streams, which resulted in excessive streams and, eventually, running out of resources. Second, our Runscope tests were running too often, exacerbating the issue by creating 288 unused streams a day. We learned that in some cases running a Runscope test too often is not ideal, and that building a test and monitoring model around new features can help you find bugs in your platform.

Conclusion

Thanks for sticking with us for the whole article. Hopefully you got a lot of value out of it, and feel free to ask us any questions about our process, or about any of the individual services we use, in the comments below. Happy Service Building!

Tutorial: Continuous Integration with CircleCI and Runscope API Tests


CircleCI is a popular Continuous Integration and Delivery solution (used by Facebook, Kickstarter, and Spotify), and integrating it with Runscope Trigger URLs so that your Runscope API tests run as part of your builds takes only a few minutes!
 
Runscope can be used by itself to monitor and test APIs, but it can also be used as part of your CI workflow. We have a sample Python script in our GitHub that can be used to trigger a set of tests in your Runscope account, and change the build status based on their results.
 
In this tutorial, we're going to show you how to use that script with CircleCI. We'll cover how to:

  1. Generate a Runscope API access token
  2. Get your test's Trigger URL
  3. Set up your environment variables in CircleCI
  4. Run the Python script as part of your test commands

Let's get started!

Setting up the Runscope Python script

You can find the sample script in our GitHub:

The two files we're interested in are requirements.txt and app.py. For this tutorial, I'm just going to work with the raw links from our GitHub repository.
 
If you're integrating this into your project, I highly recommend either forking it to your own repository, or adding these files to a separate folder. That way, you can prevent your build from breaking in case there's an update to the repository.

Getting your Runscope variables

Trigger URL

When running this script from the command line, you can pass it one parameter: a Runscope Trigger URL. If you want to run a single test, you can find its Trigger URL under your environment settings:

Runscope test environment settings with the Trigger URL option selected

If you want to run all the tests in a bucket, you can find a separate trigger URL in your bucket settings:

Runscope dashboard highlighting the Bucket Settings link; Runscope bucket settings page, highlighting the Trigger URL section

Generating Your Runscope API Key

We need a Runscope personal access token to interact with the Runscope API and retrieve the results from our test run.
 
To get your access token, head over to your account's application tab, and click on Create Application. In the next screen, give your application a name, website URL, and callback URL. You can use dummy URLs if you're just using this app for your CI integration (e.g. http://example.org):

Runscope applications page displaying the Create Application page with dummy values

Click on Create Application to finish the process. Then, scroll down to the bottom of your new application page and copy the personal access token value. We're going to use that in our next step:

Runscope application page detail, with the Personal Access Token highlighted

Integrating with CircleCI

In your CircleCI account, select the Build tab on the left-hand side menu, and click on the gear icon next to the project you want to integrate with Runscope to open its settings:

CircleCI Builds menu, highlighting the gear icon next to a project's under the project list

On the left-hand side Settings menu, click on Environment Variables under Build Settings, then click on Add Variable. We only need to add one environment variable here named RUNSCOPE_ACCESS_TOKEN. Paste the access token that you copied in our previous step under Value, and click on Add Variable:

CircleCI project settings page, showing the Environment Variables page with the Runscope access token variable set

Now, let's go to Dependency Commands under the Test Commands menu.

The CircleCI environment already comes with python and pip pre-installed. The first thing we need to do is make sure the necessary packages for the script are installed. Add the following command to your Pre-dependency commands window:

pip install -r https://raw.githubusercontent.com/Runscope/python-trigger-sample/master/requirements.txt

Note: Remember to change the requirements URL above to your fork or local file.

Next, I'm going to add another command just below it to download our app.py file (you can skip this step if you copied the file to your project):

wget https://raw.githubusercontent.com/Runscope/python-trigger-sample/master/app.py
CircleCI project settings, showing the Dependency Commands page with the Pre-dependency commands window filled with the Runscope commands

For the final step, let's head to Test Commands under the Test Commands menu. In the Post-test commands window, we can run our app.py script. It takes one parameter, which is the Trigger URL you copied at the beginning of this tutorial. So we can just run the command as:

python app.py https://api.runscope.com/radar/ba6e5157-29bc-4dae-96aa-221ffc559361/trigger?runscope_environment=84fcfb03-cfe4-412e-b460-1bca75b0aefa

Note: Remember to use the correct directory for app.py if you copied it to a folder inside your project.

CircleCI project settings, showing the Test Commands page with the Post-test commands text window including the command to run the Runscope python script

Continuous Integration Complete

In your next build runs, you should be able to see an extra step running the Python script, and hopefully returning a green checkmark ✅:

CircleCI build detail page, showing the successful output of the Runscope Python script where all the API test runs have passed

With Runscope integrated into your CI process, we hope that you have even more confidence in your builds and that your APIs will be 200 OK.

We used CircleCI in this tutorial, but these instructions should also apply to other CI providers as well (check our Codeship tutorial and our Jenkins plugin). If you need any help with those, please reach out to our awesome support team.

Are you integrating Runscope in your build process? If so, we'd love to hear how you're doing it, just reach out to us via email or on Twitter.

Making Requests to the AWS API with Signature Version 4 and Script Libraries


We recently had a customer that wanted to test and monitor a few endpoints for the AWS API. For security reasons, most requests to AWS APIs have to be signed using their Signature Version 4 signing process. There are several SDKs and libraries that can help with that signing process, but we wanted something that we could integrate with our script engine with plain JavaScript.
 
One of the most powerful features that Runscope has is the ability to add pre-request or post-response scripts to your API tests, so you can programmatically change your requests based on your API requirements. For example, you can add custom headers before your API request is made, or remove sensitive information from your API responses before they are stored and shared for viewing.
 
So, we put together an aws4.js library, based on the mhart/aws4 Node.js library, that you can add to your Runscope account and then use in your pre-request scripts to sign your AWS requests!
 
You can find the library in our GitHub repository for Runscope/script-libraries, which includes the JavaScript file and detailed instructions on how to use it. Basically, it involves 3 steps:

  1. Adding aws4.js as a custom library to your Runscope account
  2. Activating the library in your test's environment settings
  3. Editing the test step pre-request script

If you have any questions or suggestions about the library, please help us make it better by opening an issue or PR on GitHub! And big thanks to mhart for the original Node.js library!

Copying Runscope Environments using the Runscope API


The Runscope UI provides a lot of flexibility for our customers to monitor and test APIs. You can easily manage the tests you have written, whether that means exporting them, duplicating them, or moving them to a different bucket.

Sometimes, however, our customers want to do something that we don’t have built-in. This is where the benefits of having a robust API come into play. For example, one customer recently expressed interest in being able to export an environment for backup purposes; another customer wanted to copy a shared environment to a different bucket.

Runscope UI, showing an API test editor page with the environment section expanded. Includes two commonly used initial variables, baseUrl and apiKey.

Environments are one of the key elements in creating reusable API tests, especially when it comes to testing local, staging, and production APIs. This blog post outlines two example Python scripts to address these needs of importing/exporting environments. You can find these scripts at Runscope/runscope-api-examples. Feel free to fork/modify them to fit your needs.

Getting Started with Runscope API

The first step in working with the Runscope API is to get a Runscope API token. The fastest way to do this is to go to the Applications page in your account and select “Create Application”. Since you are creating it for personal use (as opposed to creating an application for third-party use), you can fill in any URL for the website URL and callback URL. Once you have filled in the form, click Create Application. You can grab your Personal Access Token from the bottom of the following page to use in the scripts:

Runscope Application detail page, showing the personal access token a user can find at the bottom of the page, to quickly make requests to the Runscope API.

Configuring the Scripts to Run

Next, we have to get the scripts that will do the heavy lifting for us. You can download the ZIP from the GitHub repository, or clone it with:

git clone https://github.com/Runscope/runscope-api-examples.git

The first script -- runscope_export_env.py -- allows you to export a Runscope Environment to a JSON file. The second script -- runscope_import_env.py -- allows you to import a JSON file exported by the first script.

To run the Python scripts, there is a config file (runscope_config.py) that drives both of the scripts:

master_bucket_key = 'BUCKET_KEY_OF_SHARED_ENVIRONMENT_TO_EXPORT'
master_env_id = 'ENVIRONMENT_ID_TO_EXPORT'
runscope_token = 'RUNSCOPE_PERSONAL_ACCESS_TOKEN'
runscope_dest_bucket = 'BUCKET_KEY_TO_WRITE_TO'

This configuration file will define the bucket you want to take the shared environment from, the shared environment you want to copy, the bucket you want to write the environment to, and your Runscope API token, as found above. Fill in the appropriate values and save this file before running the scripts.

Exporting a Runscope Shared Environment

The export script, runscope_export_env.py, leverages the environments endpoint and can be run with:

python runscope_export_env.py

This will use the Runscope API token you have specified to authenticate and retrieve the selected environment from the selected bucket using the endpoint:

GET /buckets/<bucket_key>/environments/<environment_id>

And then, it will save the results to a JSON file in the same directory as your script, with the same name as your environment.
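Under the hood, the export boils down to a single authenticated GET request. Here is a simplified sketch of that call (the config values are the same placeholders as above, it assumes the standard Runscope API response envelope with a top-level data field, and the actual script handles more edge cases):

import json
import requests

runscope_token = 'RUNSCOPE_PERSONAL_ACCESS_TOKEN'                  # placeholder
master_bucket_key = 'BUCKET_KEY_OF_SHARED_ENVIRONMENT_TO_EXPORT'   # placeholder
master_env_id = 'ENVIRONMENT_ID_TO_EXPORT'                         # placeholder

# Fetch the shared environment's definition from the Runscope API
url = "https://api.runscope.com/buckets/{}/environments/{}".format(
    master_bucket_key, master_env_id)
resp = requests.get(url, headers={"Authorization": "Bearer " + runscope_token})
resp.raise_for_status()
environment = resp.json()["data"]

# Save it to a JSON file named after the environment
with open(environment["name"] + ".json", "w") as f:
    json.dump(environment, f, indent=2)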

You can save this file for future use, modify it if you want to change it before importing to a different bucket, or simply move on to the next step to import it to another bucket.

Importing a Runscope Shared Environment

To import the environment you have exported back to another bucket (specified in the config file), you can just run the second script with:

python runscope_import_env.py 'Your Environment Name.json'

Which will use the endpoint:

POST /buckets/<bucket_key>/environments

That will create a new shared environment in the bucket specified based on the JSON file previously saved. This environment will then be available to tests in this bucket.
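A simplified sketch of the import call looks similar (same placeholders and response-envelope assumption; the actual script takes the file name as a command-line argument and handles additional details):

import json
import requests

runscope_token = 'RUNSCOPE_PERSONAL_ACCESS_TOKEN'   # placeholder
runscope_dest_bucket = 'BUCKET_KEY_TO_WRITE_TO'     # placeholder

# Load the previously exported environment and create it in the destination bucket
with open('Your Environment Name.json') as f:
    environment = json.load(f)

url = "https://api.runscope.com/buckets/{}/environments".format(runscope_dest_bucket)
resp = requests.post(url, json=environment,
                     headers={"Authorization": "Bearer " + runscope_token})
resp.raise_for_status()
print("Created environment:", resp.json()["data"]["name"])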

Advanced Usage

In addition to copying shared environments between buckets, these scripts can also be used to work with test-specific environments or to overwrite a shared environment. To access these advanced features, check out the README for the command line flags you can use to override the defaults.

With the power of the Runscope API, we make it easy to extend our functionality on your own. If you have any suggestions on how we could improve those scripts, or other things you might want to do that we currently don't have built-in, feel free to open a PR in our api-examples repository, or reach out to our awesome support team!

Tutorial: Integrating Runscope with New Relic Insights


New Relic Insights is a real-time analytics platform that collects event and metric data from other New Relic products, and also third-party integrations. By connecting New Relic Insights with Runscope API monitoring, you can collect metrics from your API tests and transform them into actionable insights about your applications.

Connecting Your Accounts

We need to get two variables from New Relic to integrate it with Runscope: an Account ID and an Insert Key.

Getting a New Relic Account ID and Insert Key

Log in to your New Relic Insights account, and click on Manage Data on the left-hand side menu:

New Relic Insights highlighting Manage Data option on left-hand side menu

Then on the top bar, click on API keys:

New Relic Insights highlighting API Keys option on top bar

Next, click on the "+" button next to Insert Keys:

New Relic Insights with an arrow pointing to the + button next to Insert Keys

On the next page you will find an Account ID and a Key. Copy these two values; we're going to use them in the next steps:

New Relic Insights highlighting the account id and key variables in the new insert key creation page

Activating New Relic Insights in Runscope

In Runscope, click on your profile on the top-right, and select Connected Services:

Runscope account highlighting the Connected Services option on the dropdown after clicking on the user's profile on the top right

Find the New Relic Insights logo and click on Connect New Relic Insights:

Runscope connected services page, highlighting the New Relic Insights integration and the button Connect New Relic Insights

Here you have to provide two items from your New Relic account: the Account ID and the Insert Key. Paste the values we just got in the previous step:

Runscope New Relic Integration page, showing the two textboxes where the user has to add their New Relic Insights account id and key from the previous steps

For the last step, make sure to enable the integration for any API test whose results you wish to send back to New Relic Insights. You can toggle the integration on/off by opening an API test, expanding the Environment Settings, and clicking on the Integrations tab on the left-hand menu:

Runscope API test, showing an expanded environment settings with the Integrations tab selected, and the newly connected New Relic Insights integration toggled on

And you're all set!

NRQL Queries and Event Reference

For more information about the types of events and properties that are sent to New Relic Insights, and a few example queries you can use with your data, check out our full docs here.

And if you need any help with your integration, please reach out to our awesome support team!


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.


Debugging SSL Errors in Your API Monitoring

A rounded rectangle with three circles on the top-left corner representing a simplified browser window, with a lock-pad symbol  and an empty text box in the center of it.

When working with external or internal APIs, a million things can make it return errors: missing HTTP headers, sending an invalid authorization credential because you forgot to append "Bearer" to your API token, or even just a simple typo on the endpoint you're trying to use.

Those issues can be frustrating, and sometimes take hours of banging your head against the keyboard or reading documentation to figure out what's wrong, but the solutions are easy. Another issue that is generally much harder to fix is debugging an SSL error.

What is SSL/TLS? (the short version)

SSL, or Secure Sockets Layer, is a security standard technology to help establish an encrypted connection between two systems: a server and a client (browser), or a server and another server, for example.

After SSL was updated to 3.0, the next version was not numbered 3.1 or 4.0; instead, the protocol was renamed TLS 1.0 (why?). Much like the OpenAPI Specification and Swagger nowadays, SSL and TLS are still used interchangeably when talking about the protocol or certificates.

How to Debug SSL Errors

Debugging SSL errors can be challenging for a few reasons. Error messages returned because of SSL issues are not always specific, and they rarely indicate how to fix the problem. Another reason is the way different browsers, language stacks, and tools handle SSL. For example, you might be testing an API using cURL at first and getting a successful response from an endpoint, only to find the same request doesn't work from your Node.js application.

So, how can we start debugging these SSL errors?

Common Errors

Two of the most common issues we have seen out in the wild and helped our customers with are actually really easy to fix: invalid certificates, and incomplete certificate chains.

Even though they are two different errors, depending on the tool or stack you're using you might see the same error message. For example, this is what you would see in Runscope:

Error contacting host SSL: certificate signed by unknown authority

That is also the same message we return in case the server has a self-signed certificate, which is a common use case during development.

For self-signed certificates, we recommend simply turning off SSL validation temporarily in whichever tool you're using. In Runscope, you can disable SSL verification in your test by going to the test's environment -> Behaviors -> Validate SSL, or for all tests in your bucket by going to Bucket Settings -> Traffic Inspector -> Verify SSL Certificates.

CA Certificate List

When running into this error, the first thing we need to do is make sure the tool we're using trusts the authority that issued our SSL certificate. At Runscope, for example, we use the Mozilla CA Included Certificate List. You can usually find which certificate authorities a tool supports with a quick Google search or by reaching out to their support team.

SSLLabs SSL Server Test

If the server's SSL certificate provider is part of the list, then the next step we can take to debug the issue is to use the SSLLabs SSL Server Test tool. This free tool performs a deep analysis of any public SSL web server you point it to.

SSL Report summary for the runscope.com URL, showing just the top part of the Summary box with the Overall Rating of A, and a sideways bar graph for 4 attributes.

SSLLabs will give the hostname a grade ranging from A+ to F, or M and T (more info about the ratings). Scrolling down the report, we can see more information about the certificates, along with warnings if any issues are found. For example, if a certificate is not trusted, you'll see it on the Trusted line:

A cropped image of the Certificate #1 section of an SSLLabs report. It highlights the Issuer line, and the Trusted line, showing its value as "No   NOT Trusted".

You can also check the Issuer line to make sure it is included in the certificate list your server or browser is using.
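If the server isn't publicly reachable and SSLLabs can't scan it, you can inspect the certificate chain it presents with a standard OpenSSL command (replace the hostname with your own):

openssl s_client -connect api.example.com:443 -servername api.example.com -showcerts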

Incomplete Certificate Chain

The most common issue that SSLLabs uncovers, and the one we see most across our customers, is the "Incomplete" error on the Chain Issues line:

A cropped image of the Additional Certificates part of an SSLLabs report. It highlights the Chain Issues line, with the value "Incomplete".

SSL certificate chains are handled differently across browsers and tools. When a browser or tool connects to a server, it checks whether the server's certificate was issued by a trusted CA. If it wasn't, it checks whether the intermediate certificate that signed it was issued by a trusted CA, and so on. Those intermediate certificates can be bundled into one file, or they might be links the browser or tool has to follow. Some browsers and tools won't follow those links, causing this SSL error to pop up.

Showing the SSL certificate for *.runscope.com, and highlighting the Intermediate certificate authority COMODO RSA Domain Validation Secure Server.

To fix this issue, if you have control of the server, you can bundle all intermediate certificates into a single file and update your server with it. You can usually find instructions with a quick Google search for "<your certificate provider> bundle certificates".
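The bundling itself is usually just a concatenation, with your own certificate first and the intermediates after it (the file names below are placeholders, and the exact order and contents your server expects may vary, so check your server's documentation):

cat your_domain.crt intermediate_1.crt intermediate_2.crt > your_domain_bundle.crt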

Other Resources

These are two of the most common SSL/TLS errors, and fixes, that we see. If you've run into any other issues, please let us know and we would love to add them here!

You can also check out this more thorough guide on "SSL/TLS - Typical problems and how to debug them" for help in debugging other issues.


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.



Tutorial - Converting your Swagger 2.0 API Definition to OpenAPI 3.0

  Open API Initiative logo with the text "2.0 -> 3.0" underneath it  

At the end of July, the OpenAPI Specification 3.0.0 was finally released by the Open API Initiative. It's a major release, and after 3 years in the making, it brings about a lot of improvements over the 2.0 specification, making it possible to create definitions for a broader range of APIs.

What's New in OpenAPI 3.0.0

There are a lot of new features in this version, such as:

  • Added support for multiple root URLs.
  • Added support for content type negotiation.
  • Extended JSON Schema support to include `oneOf`, `anyOf` and `not` support.
  • Added a callback mechanism to describe Webhooks.

Those are just a few of the changes in the new specification. For more information on what's new, I highly recommend checking out:

Converting Your API Definition

We can convert our v2.0 API definition with the swagger2openapi open-source project made by Mermade Software. You can find a hosted version of the web app here:

After opening the URL above, just paste or upload your API's v2.0 definition, make sure that the Validate checkbox is unchecked, and click on Convert Swagger/OpenAPI 2.0:

Then openapi-converter.herokuapp.com website, with the Convert tab selected, and the first few lines in JSON of the Runscope v2.0 API schema in the "Paste Swagger 2.0 here:" textbox, and the "Validate" checkbox unchecked.

If you have a valid definition, the app should return the converted v3.0.0 version for it:

The result page after hitting "Convert Swagger/OpenAPI 2.0" button in the converter app, showing the first few JSON lines of the Runscope API schema, now in v3.

You can check out our Runscope API v2 definition, and the resulting converted v3 file on GitHub.
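If you'd rather convert definitions locally or as part of a build step, the same Mermade project is also published as an npm package with a command-line tool. A rough sketch of using it (the exact options may differ, so check the project's README):

npm install -g swagger2openapi
swagger2openapi swagger.json > openapi.json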

Should I update my specification?

That depends. It'll be a while until all the tools that supported v2.0 also support the new v3.0.0, so it really depends on how you're using the specification. But if you couldn't describe your API before because it needed features that are only now available, this is a good time to start building your API definition. 😄

You can find a list of tools that support v3.0.0 in the OAI-spec GitHub repository, and you can also check out our Swagger/OpenAPI resource guide.


Do you have an OpenAPI 2.0 definition for an API you'd like to start monitoring? Learn more about how you can import it into Runscope and sign up for your free trial account today.


Playing with Node.js and the Runscope API on Glitch


I've been wanting to create a project on Glitch for quite some time. Glitch is a startup/product/friendly community where you can create and remix Node.js projects and use an online code editor to personalize them, without having to worry about hosting or deployment. And it's free! It's a really great way to start a project and prototype an idea without the million little things that can get in the way of your dream app.

The Glitch code editor showing the runscope-oauth project with the beginning of the server.js JavaScript file

I thought it'd be fun to share a few projects I made using the Runscope API, and how you can use them to extend Runscope functionality or create custom features for use cases you might have, like creating a custom dashboard that displays API metrics using C3.js.

Here we'll take a look at three projects:

  • runscope-oauth - Uses Passport.js + passport-oauth2 + the Runscope API authentication.
  • runscope-batch-edit - Remix of runscope-oauth, uses the Runscope API to get a list of user buckets + list of tests in a bucket + set multiple tests schedules + set multiple tests default environments.
  • runscope-api-metric - Remix of runscope-oauth, uses the Runscope API to get a test's metrics information (avg. response time, success ratio, etc.) + express-healthcheck Node.js package to display app's uptime information.

Clicking on a project's link will take you to the Glitch homepage and show you a popup with the README for the project, along with three buttons: preview the app live, view the source code, or "Remix your own" to make a copy of the project under your account so you can play with it.

The Glitch website, showing the runscope-oauth project's detail page, with three buttons (Preview, View Source, and Remix your own) under the project's name, and the beginning of its README

Let's take a closer look at them!

Authenticating with the Runscope API via OAuth 2.0

Our first project is actually a remix of another project (github-oauth). You can give it a try here:

And you can view the source code directly here:

The goal of our first app is to ask the user to authenticate with their Runscope credentials, and show a success screen if it works. We do this by using Passport, implementing the passport-oauth2 strategy, and setting up the code to use the authorization, token, and callback URLs found in our API Authentication docs. In our server.js file:

// In server.js: configure Passport with the generic OAuth 2.0 strategy,
// pointing it at Runscope's authorization, token, and callback URLs.
var passport = require('passport');
var OAuth2Strategy = require('passport-oauth2').Strategy;

passport.use(new OAuth2Strategy({
    authorizationURL:   'https://www.runscope.com/signin/oauth/authorize',
    tokenURL:           'https://www.runscope.com/signin/oauth/access_token',
    callbackURL:        'https://'+process.env.PROJECT_DOMAIN+'.glitch.me/auth/runscope/callback',
    clientID:           process.env.RUNSCOPE_CLIENT_ID,
    clientSecret:       process.env.RUNSCOPE_CLIENT_SECRET
  },
  function(accessToken, refreshToken, profile, cb) {
    // TODO: retrieve user profile from Runscope API
  }
));

Our project also includes an example of using the `request` package to grab the authenticated user's account information and log it to the console, but we don't display it on the success page:

...
  },
  function(accessToken, refreshToken, profile, cb) {
    // retrieve user profile from Runscope API
    // (the request package is required near the top of server.js)
    request({
      url: 'https://api.runscope.com/account',
      auth: {
        'bearer': accessToken   // use the OAuth access token as a Bearer token
      }
    }, function(err, res) {
      if(err) {
        console.log('error getting account details: ' + err);
        return cb(err, null);
      } else {
        var body = JSON.parse(res.body);
        console.log(body);
        return cb(null, body.data);   // the account details live under the "data" key
      }
    });
  }
));
Glitch editor running the runscope-oauth application, and showing where the user can find the Logs button to open the Activity Log menu for the app, and the console.log statement that prints the Runscope user information after a successful authentication

This is our starting point for creating an application that uses the Runscope API, and that other users will be able to sign in to with their own Runscope credentials.

Batch Editing Schedules and Default Environment

Our second project is a remix of our first project, and builds on its initial functionality. You can give it a spin here:

It asks the user to authenticate with their Runscope account login, and then displays a list of the user's buckets:

Displaying the list of buckets in a Runscope account, after successful authentication of the runscope-batch-edit Glitch app

We do this by using the request package, and the Buckets List endpoint:

// authToken, username, and buckets are defined elsewhere in server.js.
app.get('/getBuckets', requireLogin,
  function(req, res) {
    request({
      url: 'https://api.runscope.com/buckets',
      auth: {
        'bearer': authToken
      }
    }, function(err, response) {
      if(err) {
        console.log('error getting buckets: ' + err);
        res.send("An error occurred");
      } else {
        var body = JSON.parse(response.body);
        buckets = body.data;              // the list of buckets lives under the "data" key
        res.send([username, buckets]);    // hand the username and buckets to the front end
      }
    });
  }
);

There are two links next to each bucket: Set Schedule and Set Environment. Clicking on either one will take you to a new page listing all the tests in your bucket, where you can:

  • Set the schedule for multiple tests to be monitored every 1min/5min/15min/1h/1d
  • Set the default environment for multiple tests to a shared environment, or the environment used in its last test run
The Glitch runscope-batch-edit app, showing the Set Schedule screen for a bucket. There's a radio button form where the user can select the available schedule options for their tests: 1/5/15/30min, 1/6h, and 1 day. Below that is a checkbox form showing the list of tests in the bucket, and a submit button to make the API call that will update the tests.

The Glitch runscope-batch-edit app, showing the Set Environment screen for a bucket. There's a radio button form at the top where the user can select any shared environments for that bucket. Below there's another checkbox form, where the user can select which tests to modify, and a submit button to make the API call that will update the tests.

The ability to batch edit test schedules and default environments is something that currently can't be done via our UI, but we can make it work via our API with a few lines of code. :)

Getting API Metrics Information and Using C3.js

Our last project started out as a remix of our first project, and displays metrics information from a Runscope API test such as total test runs, average response time, and success ratio. We can test out the project by going to:

It retrieves the information from our new Metrics API endpoint. We also draw a timeseries chart using C3.js:

The Glitch app runscope-api-metric, displaying the uptime information for the app, as well as metrics from the Runscope API Metrics endpoint: total number of test runs, region, timeframe, and response time 95th percentile. It also shows a line timeseries chart, with two lines representing the success ratio and average response time in ms for an API monitor
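
The chart itself only takes a few lines of C3.js. The sketch below uses made-up sample values; in the app, the columns are built from the Metrics API response:

var chart = c3.generate({
  bindto: '#chart',                        // the element that will hold the chart
  data: {
    x: 'x',
    columns: [
      // sample values only - the real app fills these in from the Metrics API response
      ['x', '2017-11-01', '2017-11-02', '2017-11-03'],
      ['success ratio', 1.0, 0.98, 1.0],
      ['avg response time (ms)', 320, 295, 310]
    ],
    axes: {
      'avg response time (ms)': 'y2'       // plot response time against a second y-axis
    }
  },
  axis: {
    x: { type: 'timeseries', tick: { format: '%Y-%m-%d' } },
    y2: { show: true }
  }
});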

This project is a little simpler. After you authenticate with your Runscope account, there's a form at the top of the page where you can put in a Runscope bucket key and test id:

The Glitch runscope-api-metric app, showing the screen after a successful authentication with a user's Runscope credentials. It contains a simple form with two text fields: one for the bucket key with the placeholder text your_bucket_key, and one for the test id with the placeholder text your_test_id. There's a submit form button below the two fields, that will make the API call to the Runscope API Metrics endpoint with the information in the text fields.

You can find a test's bucket key and id by looking at the URL when you open a test in Runscope:

The Runscope website, highlighting the URL for a Runscope API test. It shows the URL: https://www.runscope.com/radar/o8mclwfrxtz0/7650e8d1-19af-4669-bb0e-3fcfef9a7335/overview, and there's two pink boxes highlighting the Bucket Key (o8mclwfrxtz0), and the Test ID (7650e8d1-19af-4669-bb0e-3fcfef9a7335)

After that, just hit Submit and the app will do its magic and retrieve the API metrics information from the Runscope API. :)

Remix Your Own Projects!

One of the cool things about Glitch is that you can check the source code for all of the projects above, and remix them to make your own apps. First, open the details page for the project you want to remix, and click on the "Remix your own" button:

The Glitch UI showing the details of the runscope-batch-edit project, and the buttons Preview, View Source, and Remix your own at the bottom

After that, we'll need to set up our OAuth 2.0 authorization flow:

  • On Glitch, make sure you're on the code editor for your new app. Click on the "Show Live" button at the top to open your app in a new tab, and copy the app's URL
  • Go to your Runscope account Applications page, and create a new application. Give your application a name, and paste your app's URL in the website field. For the callback field, paste your URL again, but add `/auth/runscope/callback` at the end. Click on Create Application. 
  • Copy your Client ID and Client Secret values.
  • Go back to your project's code editor on Glitch. Open the `.env` file on the left-hand side, and paste your Client ID and Client Secret into their respective variables, as shown in the sketch after this list (make sure you don't add spaces around the "=" symbol).
  • You should be all set! On the Glitch editor for your project, click on the "Show Live" button and test your Runscope authorization flow.
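
For reference, the `.env` file ends up looking something like this (the values are placeholders for the Client ID and Secret you copied from your Runscope application):

RUNSCOPE_CLIENT_ID=your_client_id_here
RUNSCOPE_CLIENT_SECRET=your_client_secret_here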

I've really enjoyed using Glitch for the past few days to build out those few sample projects. Not having to worry about setting up a developer environment, or thinking about deployment and hosting is amazing, and checking out the source code for other projects really helped me build out these prototypes.

I hope you can use some of those projects to remix your own Node.js applications, or maybe learn a thing or two about using the Runscope API, the Passport and request Node.js packages, or the C3.js project!


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API monitoring solution and sign up for your free trial account today.


Runscope is Joining CA Technologies!

The CA Technologies and Runscope logos.

Today I’m excited to announce that Runscope has been acquired by CA Technologies. We’re bringing our market-leading API monitoring tools to CA to further our shared mission of equipping developers with the tools they need to deliver and operate mission-critical APIs powering the modern enterprise.

Five years ago, Frank and I saw an opportunity to build a new class of developer tools for modern, API-driven applications. We were joined by a fantastic group of investors and team members passionate about building great products driven by an obsession with customer success.

Today, Runscope is used by over 1,200 customers to run 19,000,000 API uptime and data validation checks every day. Companies of all sizes use Runscope to monitor the APIs behind innovative experiences for retail shopping, online commerce, media delivery, IoT, self-driving cars, and so much more. In the enterprise, microservices architectures are redefining how companies ship and evolve software.

As we’ve come to know the CA team, we’ve been impressed with the breadth and power of their API lifecycle products, their ability to deliver across cloud and on-premises environments, and their history of successful acquisitions like Layer 7 and BlazeMeter. 

The future of Runscope is bright. The entire Runscope team is joining CA to continue executing on our customer-driven roadmap, and we're excited to have CA's backing and resources to make that roadmap a reality. In addition, we can now fulfill a broader set of customer demands through integrations with BlazeMeter, CA API Management, and CA APM. Additional delivery options for customers with compliance requirements are also a high priority for us. Lastly, we're eager to work with the API Academy team to educate people about API monitoring best practices.

Runscope would not be in the position it is today without the support of so many people. It’s been a rollercoaster ride, and we are eternally indebted to those who have been involved along the way. Above all, I want to thank our customers. From day one we’ve been driven by your (often enthusiastic) feedback. You kept us going when times were tough. Thank you for trusting us. Our first customer is still a customer today, and I hope for a long time to come.

As we start this new chapter in the history of Runscope, our mantra remains the same: 

Everything is going to be 200 OK®
 

How to Sync your OpenAPI Schema in Stoplight with GitHub and Runscope

A text editor with an OpenAPI 3 schema for a Hello World API.

This is a post from our Featured Guest Series! Glen Semino shows how to combine Stoplight and GitHub APIs with Runscope to keep your OpenAPI Schema always versioned and up to date.

If you're interested in sharing your knowledge on our blog, check out our Featured Guest Series page for more details.

About a month ago, after part of the SYNQ team and I attended the APIDays SF conference, we reflected on what we had learned there. One of the things we realized was that our API spec documentation needed quite a bit of improvement. Among the tools discussed at the conference was Stoplight, which helps you design, document, mock, and debug APIs.

We decided to give Stoplight a try to rewrite and edit our API spec. Once we started, I noticed that I was often manually syncing our Open API spec (OAS) file that Stoplight generates with our GitHub repo. I wanted a way to automate this process so that regardless of what gets edited/changed in Stoplight, Stoplight and GitHub are always in sync. 

This is where Runscope came to save the day. Using the export function Stoplight offers in addition to GitHub’s API, I was able to automate syncing our Stoplight OAS spec file with our GitHub repo every minute using Runscope. In this tutorial, we're going to walk through this workflow step-by-step so that you can do it too!

The Setup

These are the things you will need to do to create the necessary API requests in Runscope to automate the syncing process:

  • Generate a personal access token from GitHub for using the GitHub API
  • Obtain the export URL for your spec file from Stoplight
  • Commit your spec file from Stoplight into GitHub manually just once, so that it has a place in your GitHub repo 
  • Get the path to the spec file in GitHub you want to commit to, along with the file name, to generate a URL like this: https://api.github.com/repos/SYNQ/spec/contents/oas.json (replace the owner, repository, and file name with your own)

What we will be doing first is obtaining our spec file from Stoplight, then committing the updated spec file into GitHub (in the same place where we manually created one). The process is then automated by running it on a schedule in Runscope. 

If you want to use a template as a starting point, scroll to the end of the blog post to download a JSON file you can import to your Runscope account that includes a template of all the requests you need to create.

Obtaining your Spec file with Runscope

To start, we will be retrieving our spec file from Stoplight using a simple ‘GET’ request to our spec’s file export URL. 

The export URL from Stoplight will look something like this:

  • https://api.stoplight.io/v1/versions/{version_id}/export/oas.json

We can obtain it by following the instructions provided in the Stoplight docs' "Exporting to Swagger or RAML" guide.

After we have the export URL, we need to create a ‘GET’ request in Runscope using the request editor. It will look like this:

A Runscope GET request step to the api.stoplight.io export URL

For the ‘Assertions’ on our ‘GET’ request, we want to do:

  • Request ‘Status Code’ equals 200
  • Text Body ‘is not empty’ 
The Assertions tab from the previous Runscope GET request step, with the two previously mentioned assertions set up.

And lastly, for the Variables section of our Runscope request, we want to:

  • Store ‘Text Body’ in a variable 'json_body'
The Variables tab from the previous Runscope GET request step, with the previously mentioned 'json_body' variable set up.


Let's save and run this request to make sure it works. Assuming all went well, we are now ready for the next request. 

Getting a Commit from GitHub

When using Git to do a commit, we need to have the SHA (unique ID or hash) for the specific file we want to commit to. To get that, we will make another ‘GET’ request. This time it will be calling the GitHub API to get the SHA for the specific file we want to commit to. 

The URL we will be making a GET request to in Runscope will look something like this:

  • https://api.github.com/repos/SYNQ/spec/contents/oas.json

The URL above should point to the path and the name of the spec file we will be committing to. 

After we have the URL to call the GitHub API, we will create a new request in Runscope. To call the GitHub API, we will need the personal GitHub access token that we created as part of the setup section. To authenticate with GitHub, we will need to add an ‘Authorization’ header and then provide our token. It should look like this:

A Runscope GET request step to the api.github.com URL for a JSON file in a repository, with the Authorization header set to "token {your_personal_access_token}".

Note: make sure your Authorization header includes a single space between "token" and the token key.

For the ‘Assertions’ on our ‘GET’ request, we want to assert that the:

  • Request ‘Status Code’ equals 200
  • JSON Body ‘is not empty’ 
The Assertions tab from the previous Runscope GET request step, with the two previously mentioned assertions set up

Lastly, for the variables section of our Runscope request, we want to:

  • Store the ‘JSON Body’ property ‘sha’ into a variable 'current_sha'
The Variables tab from the previous Runscope GET request step, with the variable 'current_sha' set up.

Save, run and make sure everything works again. Now we are ready for the last step of committing to GitHub. 

Committing your spec file to GitHub

The last request that we need is a PUT request to the GitHub API to commit our updated spec file from Stoplight. 

We can use the same URL as for the previous request we just created, for example:

  • https://api.github.com/repos/SYNQ/spec/contents/oas.json

Using the same URL from your previous request, we can now create a new ‘PUT’ request and pass in our credentials, plus all the information needed to do the GitHub commit. 

Using the request editor, add in our PUT request. After that, we will need to set headers and parameters as described below. 

For the headers in our request, we will need the following:

  • An ‘Authorization’ header to pass in our personal token
  • A ‘Content-Type’ header set to ‘application/json’
A Runscope PUT request step to the api.github.com URL for a JSON file, with two Headers set: the Authorization header with value "token {your_personal_access_token}", and the Content-Type header set to 'application/json'.

Note: again, make sure your Authorization header includes a single space between "token" and the token key.

For the ‘Parameters’ section of our request, we will use all the variables saved in the previous two requests. The GitHub API requires a message, the SHA for the file we want to commit to, and the spec content itself. The JSON formatted parameters we will be sending will look something like this:

{
   "message": "updated-runscope-{{timestamp}}",
   "sha": "{{current_sha}}",
   "content": "{{encode_base64({{json_body}})}}"
}

The variables in the above example represent the following:

  • timestamp - is a built-in variable in Runscope that gives you an integer Unix timestamp.
  • current_sha - is the SHA for the file we want to commit to, that was obtained in the GET request to the GitHub API.
  • json_body - represents the text body we saved from the GET request to the Stoplight export URL.

Here is what the ‘Parameters’ section in Runscope should look like:

The Parameters section of the Runscope request including the code snippet in the previous code box.

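If it helps to see the whole flow outside of Runscope, here's a hedged Node.js sketch of the same two GitHub calls using the request package. The repo path, token variable, and spec body are placeholders; the GitHub contents API expects the content to be base64-encoded and, for updates, the current SHA of the file:

var request = require('request');

// Placeholders: use your own repo path and a GITHUB_TOKEN environment variable.
var fileUrl = 'https://api.github.com/repos/SYNQ/spec/contents/oas.json';
var headers = {
  'Authorization': 'token ' + process.env.GITHUB_TOKEN,
  'User-Agent': 'stoplight-github-sync'   // the GitHub API requires a User-Agent header
};
var updatedSpec = JSON.stringify({ openapi: '3.0.0' });  // replace with the spec exported from Stoplight

// Step 1: GET the file to find its current SHA.
request({ url: fileUrl, headers: headers, json: true }, function (err, res, file) {
  if (err) { return console.error(err); }

  // Step 2: PUT the updated, base64-encoded content back, referencing that SHA.
  request({
    method: 'PUT',
    url: fileUrl,
    headers: headers,
    json: {
      message: 'updated-spec-' + Date.now(),
      sha: file.sha,
      content: Buffer.from(updatedSpec).toString('base64')
    }
  }, function (err, res, body) {
    if (err) { return console.error(err); }
    console.log('GitHub responded with status ' + res.statusCode);
  });
});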


For the ‘Assertions’ on our ‘PUT’ request, we will want to assert:

  • Request ‘Status Code’ equals 200
  • JSON Body ‘is not empty’ 
The Assertions tab of the previous Runscope PUT request step, showing the two assertions mentioned above set up.


Now one last time, save and run to make sure things work. In addition to that, let's check that there is now a new commit in GitHub.

The GitHub interface for the commits in a repository, showing a single commit for the updated specification file after running the Runscope requests.

Assuming it all went as intended, we now have a Stoplight GitHub synchronizer in Runscope!

The last thing to do is simply run the monitor you created on a schedule, following Runscope's instructions found here.

Importing a Runscope Monitor Template

If you're already familiar with Runscope, the GitHub API, and Stoplight, or you're just looking to get a head start on this tutorial, we have created a JSON file that you can import into your Runscope account. It includes all the steps described in this tutorial; all you need to do is change the variables for your GitHub access token and the URLs for GitHub and Stoplight.

You can download the JSON template file by clicking here, and you can find instructions on how to import it here.

Wrapping it Up

Now we can repeat these steps for any other spec file we want to synchronize between Stoplight and GitHub. When I first thought about this problem and how I would build a system to handle it, it was taking me a few days. Once I decided to give it a try with Runscope, I was pleasantly surprised with the results and was able to complete the task within a day. Runscope has gone from being a tool we only used to monitor our APIs to a tool that we can also use to automate processes like the one described here.

If you have any questions about the process, feel free to reach out to me via Twitter @glensemino.

Design Thinking and Wicked Problems for APIs

Photo by Patrick Perkins on Unsplash. A wall filled with post-its of different colors. The two closest post-its to the left are pink and have it written on them "Impact Full" and "Pow Wow".

This is a post from our Featured Guest Series! Ash Hathaway shares her experience as a former developer turned product manager for APIs, and how design thinking has helped her team solve difficult technical problems.

If you're interested in sharing your knowledge on our blog, check out our Featured Guest Series page for more details.

You may have heard of design thinking, or even participated in a workshop using lots of sticky notes. Done correctly, design thinking is an insanely fun way to generate tons of ideas with your team, create buy-in, and leapfrog ideas, all centered around your user. So, what is it? And why does it matter for APIs?

Design thinking is a way to solve complex and multidimensional problems smarter, together. The roots of design thinking are in human-computer interaction design, which evolved into a framework for innovation. More specifically, it touts methods to find the overlap between business strategy, technological feasibility, and user needs. It is a "process for creative problem solving," according to IDEO, an international design consulting firm and large proponent (some might say the OG) of design thinking in mainstream tech today.

Why design thinking makes sense for APIs

So what does this have to do with APIs? APIs are like super technical and deal with code. That has zero to do with design, doesn’t it?

An animated GIF of Lisa Simpson in a classroom saying "But that's not science."

Well, I might ask you first: Do APIs require buy-in from multiple stakeholders? Are there conflicting needs and goals for the API? Are there questions about which features should be improved or prioritized? If you're wondering how to improve the overall experience of your API, or are lustful over the experience of some API offerings out there, then I might say you have a design problem. Unless your company is super flat or your product is completely 100% open, chances are there are conflicting viewpoints somewhere.

Design is more than just what color your logo is. Experience design is more than just wireframes. When we start to think about the thoughts that enter a human's mind before they ever search for a tool, when we look at their emotional state while working with a product, when we consider an overall ecosystem of users and their hopes and fears, we’re actually considering design. 

Design for APIs means making that first-time experience dead simple and delightful. It means delivering value to your user right away. It also means continuing the "wow" factor, or at the very least a "this doesn't cause any trouble" factor, from then on out. It means having an experience that benefits a new coder as well as a principal software engineer. It means understanding the why, the how, and the meaning for the people you're building for.

APIs as user-centric experiences

When building an API we can get wrapped up in assumptions at times. Of course we all know that 404 means “not found”. We know the difference between PUT and POST. REST APIs are obviously better than SOAP. DUH. But wait… why? To a new user these things might be very foreign concepts. To a developer working for the Government perhaps SOAP is still a thing. 

Design thinking asks people to consider their user in a very specific and very purposeful way. For instance, if I’m writing documentation for a seasoned engineer I might explain things differently than to a junior dev. Just as I might make a getting started tutorial differently for a Rubyist than a Java engineer or suggest different tools to someone wanting to use R. If it’s going to be important to a developer to make the case to their management I might put more emphasis on developing marketing material like whitepapers or webinars for executives as opposed to another sandbox demo.

An animated GIF of Jimi Hendrix during an interview saying "Different strokes for different folks. That's all I can say."

Divergent/Convergent Thinking

One of the more specific benefits design thinking points out is divergent/convergent thinking. Basically, divergent thinking creates a bunch of ideas, and convergent thinking decides among those ideas. What's great about this is that if you get a bunch of people thinking about the same problem a little differently, after diverging there are a ton of different ideas all considering the same problem from different vantage points. You begin to get a 360° view of a problem.

Converging, then, means finding patterns across these vantage points. One person may have uncovered a huge pain point which suddenly shifts the focus of user needs. Perhaps it's very clear that the majority of issues stem from one source, which clearly should be worked on first.

I’ve run through this exercise a few times with various products ranging from huge SaaS offerings to an API experience. Within the span of literally 10 minutes a broad team with different motivations can see the landscape of the issue more clearly, generate a ton of solutions and ideas on how to solve it, and then agree upon direction. *Magic*. 

In one instance of running this exercise with my coworkers, the most valuable part of the experience for me was finding out the different pain points or worries from the team. They saw the product in a different way than I did. We had similar viewpoints, but they saw the product through the lens of a head of engineering for instance. There were more thoughts into the actual building and what the team was working on in addition to this particular part of the product. I saw it through the lens of a product manager. There were more thoughts into the effects of the market or competitive strategy. When we got it all laid out on the table and started talking about priorities, planning got far easier.

An animated GIF of a 3D shooting star with a rainbow trail and the text "The More You Know" above the trail.

Remember the empathy with your code

Another often underutilized tool out of the design thinking hat is the empathy map. Most of the time I've seen teams go "yeah yeah yeah, we have personas. Check." But, I ask, what shoes would your user wear? Where do they go on vacation? To get really prescriptive: what type of wine do they bring to a dinner party? Do they even go to dinner parties?

While this might seem in the weeds, what this asks is for the API designers (and by designers I mean anyone responsible for creating or working on an API experience) to empathize with their users. To actually feel their pain points. To so clearly be able to decide what feature should be prioritized. To see where they get confused in the documentation and why people keep dropping off at a certain point in the onboarding process.

Empathy maps can again literally be accomplished in 10 minutes. It’s best to get a large group of stakeholders from various groups and if at all possible bring actual user research to the group. Firsthand accounts of usage or testimonials (good and bad!) can be invaluable. Next it’s literally taking your best guess as to what that hypothetical persona thinks/feels/says/does and what their pains & gains are in using your product. 

For some, this activity might seem ridiculous. And in a way it is, I suppose. However, I would argue that if at the end you're able to better understand the people you're building for, it will result in shipping the right thing for the right person at the right time. That's invaluable.

I’ve done this activity remotely, in a giant session with 10 people, or in a small group of just designers. It doesn’t need to be super fancy. The key is to put yourself in someone else's shoes for a moment and then capture that feeling to reference it again and again. Pro tip: bringing in user-researchers in this step can be super helpful. They can provide fantastic research and eliminate all sorts of assumptions about your user and target audience.

An animated GIF of the cast of Saved By the Bell where 6 people are doing a high-five together.

Let’s do this!!

Curious to learn more about design thinking? Have no fear! There are plenty of resources available:

Ash’s goal is to make APIs friendlier and more usable for all. As a former developer turned product manager for APIs and open-source software products she has seen first-hand the impact of a great developer experience. She has consulted for international and domestic corporations and startups and most recently was on the original IBM Watson Developer Cloud product team. Her process is rooted in empathy, collaboration, and getting ship done together. She is available for consulting and wants to learn how she can help.
