
Integrate Runscope API Tests into TeamCity Builds with New Plugin From MANGOPAY


Hugo Bailey recently reached out to us to let us know about a new project his team created to include Runscope API tests in their TeamCity build pipeline. The plugin was developed by Bertrand Lemasle and is available on GitHub. If you use Runscope and TeamCity, be sure to check it out!

We were eager to learn more about how MANGOPAY uses Runscope to keep their API healthy, so Hugo and Bertrand agreed to an interview we'd like to share with you.

What is MANGOPAY?

MANGOPAY is a payments API. Unlike traditional payment solutions, we provide our customers with a fully customizable technology. The API enables websites such as marketplaces or crowdfunding platforms to accept payments, hold the funds in segregated accounts, and manage the pay-outs. Thus, it’s a truly end-to-end payment solution. The API also has the usual features to handle KYC, chargebacks (and disputing them) and detailed reporting.

Our solution is ideal for marketplaces, crowdfunding platforms and the sharing economy. Our clients, either in-house developers or external agencies, can integrate our white-label API directly into their solution. We offer a free sandbox environment where developers can fully test the API. We also provide SDKs and kits in a variety of languages to facilitate and speed up the development process.

What led you to build the TeamCity plugin?

We aim to have one major and one minor release per month. The process of shipping these releases is fairly standard - in each case, it’s developed in our integration environment where tests are continually run to ensure everything is going smoothly. Then, once everything is completed, we’ll publish the release to our internal testing environment whereby the product managers can run further non-regression tests, and of course specific tests for each development that was done.

We rely a lot on our API tests. In addition to unit tests, we use API tests as non-regression and functional tests to prevent any unintended outcome during our development process. In the past, we used a homemade tool in conjunction with a unit test framework that allowed us to write, test and chain multiple API calls. While functional, this setup was not ideal as it required a lot of time to set up a complete test correctly. Moreover, the tool was tied to our API, and we spent hours just keeping it up to date.

Runscope seemed the right fit to replace this. But to be usable in our continuous integration, we needed Runscope tests to be understood as actual unit tests. And it was this that led us to create a TeamCity plugin.

As TeamCity now understands every single step of each Runscope test as a unit test, we simply replaced our unit tests with Runscope tests. Runscope is now fully integrated into our CI, and from a developer point of view, almost nothing has changed. We even have a more detailed output now than before thanks to the Runscope assertion details.

What are your plans for the plugin going forward?

We would really like to improve the plugin and offer more features. Today the plugin is far more powerful than we first imagined but that has made the configuration harder. For example, we would really like to make the configuration easier by making it possible to choose buckets and tests from a list rather than supplying raw IDs.

Improving plugin stability seems a priority too, given the troubles an unstable CI process can bring along.

Java isn’t one of our day-to-day programming languages, which means our plugin could be somewhat clunky. There is nothing we’d love more than to see people get in touch with us via the GitHub repo and start contributing to and improving the plugin in the weeks and months to come.


Our thanks to Hugo and Bertrand for taking the time to answer our questions. If you're interested in helping contribute to the plugin, be sure to head over to the GitHub project and participate. If you're new to API testing, learn more about Runscope's cloud-based API testing solution or sign up for your free trial account.


Introduction to GraphQL


Over the past year or so you might have heard of GraphQL or come across an article about it, especially if you're working with React. More recently, GitHub announced GraphQL support for one of their APIs. If you're wondering what GraphQL is and how it works, below is a quick primer.

In this article, we'll cover the basics of GraphQL, with some examples of how you can easily test a GraphQL API using Runscope.

What's GraphQL?

GraphQL is a query language created by Facebook in 2012. They have been using it internally and in production for a while, and more recently they published a specification for it, which has received a really positive response from the developer community. Companies like GitHub, Pinterest, and Coursera have already started adopting it and using it both externally and internally.

A few advantages of using GraphQL are:

  • Hierarchical - query is shaped like the data it returns.

  • Client-specified queries - queries are encoded in the client rather than the server. They return exactly what the client asks for, and no unnecessary data.

  • Strongly typed - you can validate a query syntactically and within the GraphQL type system before execution. This also helps leverage powerful tools that improve the development experience, such as GraphiQL.

  • Introspective - you can query the type system using the GraphQL syntax itself. This is great for parsing incoming data into strongly-typed interfaces, and not having to deal with parsing and manually transforming JSON into objects.

One of the biggest potential advantages when using GraphQL is having an efficient way to get resources from an endpoint by querying for exactly which resources you need. It can also help in retrieving resources that would typically take multiple API calls, by using a single request instead.

Here's an example GraphQL query to retrieve a specific person from an API:

{
 user(id: 1234) {
   name
   profilePic
 }
}

And here is an example response:

{
  "user": {
    "name": "Luke Skywalker",
    "profilePic": "http://vignette3.wikia.nocookie.net/starwars/images/6/62/LukeGreenSaber-MOROTJ.png/revision/latest?cb=20150426200707"
  }
}

That is a really simple example, but it shows some of the underlying design principles of the language. For example, the user schema could include multiple fields, but our query can define just the necessary information that our application needs.

There's much more to GraphQL, and I highly recommend heading to GraphQL's official website and reading through its docs to learn more about it.

How to test a GraphQL API with Runscope?

Testing a GraphQL API with Runscope is pretty similar to testing a REST endpoint. For this example, I'm going to use GitHub's GraphQL Projects API. It's still in early access, but it's easy to sign up for it and start using it.

Let's start with an introspection query, which will return our API's GraphQL schema. First, we can set our Authorization header with GitHub's API key in our Environment settings, which will be shared with all our requests.

GET Introspection query

Next, we can quickly set up our introspection query test by doing a GET to https://api.github.com/graphql:

After that, click "Save & Run" to run the test and get back the full GraphQL schema from the GitHub Projects API.

POST query

We can also test a simple query to get our GitHub username by making a POST query, and adding a JSON body to our request with the query parameter:

The API returns a JSON formatted response to us, and its structure is the same as our request. We can easily add an extra assertion to ensure we're getting the correct data by accessing the property data.viewer.login:
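If you want to exercise the same query outside of Runscope while you're experimenting, a plain HTTP client works too. Here's a minimal Python sketch, assuming a GitHub personal access token stored in a GITHUB_TOKEN environment variable and the bearer-token Authorization header that GitHub's GraphQL endpoint expects:

import os
import requests

token = os.environ["GITHUB_TOKEN"]  # GitHub personal access token

query = """
{
  viewer {
    login
  }
}
"""

response = requests.post(
    "https://api.github.com/graphql",
    json={"query": query},                        # the query travels in a JSON body
    headers={"Authorization": "bearer " + token},
)
response.raise_for_status()

# The response mirrors the shape of the query: data -> viewer -> login.
print(response.json()["data"]["viewer"]["login"])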

If you're still getting familiar with writing queries, it can be easier to build your queries first using GitHub's GraphQL Explorer, and then paste them into your tests.

POST query with arguments

Here are a couple more example queries that we can make. First, we can make a query to get an issue ID from a repository, and save that as a variable for use in later tests:

POST mutation

Next, we can use a mutation query to add a reaction to the issue we grabbed in our last test:

What's Next

GraphQL is still in its early stages, but it already has an active and lively community, and it's being adopted by some big companies. There are a lot of challenges ahead of it becoming more widely used, but just like any other API, testing and monitoring to make sure that everything is 200 OK is crucial.

If you're new to API testing, learn more about Runscope's cloud-based API testing solution or sign up for your free trial account.



Announcing Microsoft Teams Integration


At the beginning of November, Microsoft joined the chat-based workspace app market with Microsoft Teams, competing with Slack, HipChat, and others. It's fully integrated with Office 365, and has features that users of other apps have come to know and love, such as bots and connectors, plus some cool new ideas like tabs.

Here at Runscope, we know that regardless of the chat application your company uses, monitoring your APIs and being quickly notified of any errors is important for any team. That's why we are making a preview of our Microsoft Teams integration available to all customers. Here's how you can get started!

Setting up Microsoft Teams

First things first, log onto your Microsoft Teams account. We recommend creating a new channel for our monitoring notifications, as sometimes people in different teams might be interested in being notified of any API errors.

After you choose which channel you want to send notifications to, select the "Teams" tab on the left-hand side, hover over your channel, click on the ". . ." symbol, and select "Connectors".

You should see a new window with a list of multiple connectors. In our case, select the "Incoming Webhook" option and click on "Add".

In the next screen, you can select a username and a profile image for the notifications. In our case, we'll just use "Runscope", and set our logo as the image.

Note: click here to download our logo.

Finally, copy the Webhook URL that Microsoft Teams generates for you.
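Before wiring the webhook into Runscope, you can sanity-check it by posting a simple message directly. The sketch below uses a placeholder URL; Office 365 incoming webhooks accept a minimal JSON payload with a text field:

import requests

# Placeholder: replace with the Webhook URL you just copied from Microsoft Teams.
webhook_url = "https://outlook.office.com/webhook/your-generated-url"

# A minimal payload; the message should appear in the channel you configured.
response = requests.post(webhook_url, json={"text": "Hello from the Runscope setup!"})
print(response.status_code)  # 200 means the connector accepted the message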

Setting up Runscope

Now head over to your Runscope account. After logging in, click on your profile picture on the top-right and select "Connected Services".

You should see all the integration options available for your Runscope account. Look for the Microsoft Teams options and click on "Connect Microsoft Teams".

In the next page we have three fields:

  • Webhook URL: paste the URL you got from the Microsoft Teams webhook configuration here.
  • Channel Name: you can add multiple Microsoft Teams connections to the same Runscope account. The name helps you distinguish between them.
  • Notifications: select when you want to be notified after test runs are completed.

After you fill out all the fields, click on "Connect Account"!

Now, for our final step, we just have to activate our notification for our tests. Head over to the API tests you want to monitor in your Runscope account, and select "Editor" on the left-hand side.

Click on your environment settings to bring down the full options menu, and select "Integrations". Finally, click on the toggle for the Microsoft Teams service we just connected, and you're all set!

If you chose to get a notification whenever your test runs are completed, you can just click on "Run Now", and you should see a notification pop-up in your Microsoft Teams channel similar to this:

Conclusion

This integration is a preview, so please note that things might change in the future. If you have any feedback, we'd love to hear from you! Please reach out to us at help@runscope.com.


If you're new to API testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account.



Alert Your On-Call Team of Production API Failures & Anomalies with PagerDuty


When the production APIs that your apps rely on are down or returning unexpected data, notifying your team about these problems is a top priority. Broken services mean broken apps, and the faster your team is aware of issues, the faster you can solve them.

We’ve teamed up with PagerDuty to integrate Runscope Live Traffic Alerts into the comprehensive incident notification and management system.

PagerDuty provides on-call team management as a service with escalation policies, scheduling and a full complement of notification methods including SMS, phone calls, email and push notifications. Live Traffic Alerts allows you to catch API call failures and exceptions that occur in production. By combining these together, your on-call team can receive notifications about production API call failures as they happen using your existing PagerDuty escalation and notification policies.

Connecting Live Traffic Alerts to PagerDuty

Getting started is easy. Log in to your Runscope account and navigate to Alerts in the top navigation. (If you don't see Alerts, contact our Sales team about getting started.) While creating a new alert or editing an existing one, select PagerDuty from the integrations list. Once your PagerDuty and Runscope accounts are linked together, you can connect PagerDuty to as many Runscope Alerts as you wish.

Automatic Alerts, Integrated Acknowledgement

When a Runscope Alert is triggered, a PagerDuty incident is automatically raised. You can optionally resolve the raised PagerDuty incident by clicking the acknowledgement button on the Alerts dashboard. All details related to Runscope triggered incidents and resolutions are shown in the PagerDuty activity log.

 

Start Catching Production API Failures Today

PagerDuty integration for Live Traffic Alerts is available now. Live Traffic Alerts is available to Runscope customers on Medium and larger plans, as well as to those on the free trial. Sign up for Runscope today to get started using Live Traffic Alerts. If you're currently on a Free or Small plan and want to see Live Traffic Alerts in action, our Support team will set you up with a trial, or contact Sales to upgrade your plan.

This Fortnight in APIs, Release VIII


This post is the eighth in a series that collects news stories, helpful tools and useful blog posts from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more. This post is also our final Fortnight of this year. Thank you for giving us such great stories to share, and get excited for more Fortnight in 2016!

For when you want to eat your own dogfood, but there’s not a server in sight:

We’ve largely moved away from running bare metal servers in the data center to virtual servers in the cloud. The same can be said of migrating disk, SAN and NAS storage into virtual volumes in the cloud. The team from Teletext.io chronicles how they’ve built their startup entirely on AWS, taking virtualization to the next level and removing all semblances of traditional server architecture. In The Serverless Startup—Down with Servers!, the team explains how they use Amazon API Gateway, Lambda, DynamoDB, S3 and CloudFront—without a single server to manage, not even EC2.

For when visibility is in the eye of the beholder:

In 2015, microservices was certainly one of the biggest buzzwords, sparking debate from both micro- and monolith-supporters about scalability, operational complexity and resources over which is the better infrastructure. The Segment blog recently released a post that explores a different angle: visibility. In Why Microservices Work for Us, the API-led analytics company’s founder Calvin French-Owen describes how monitoring microservices makes it significantly easier to gain visibility into each “microworker”, and how his 10-engineer team has scaled to 400 private repos and 70 different services.  

For when your newspaper starts printing in JSON:

On The New York Times’ blog Open, written by NYT developers about code and development, the company announced that it is open sourcing an internal project: Gizmo, a toolkit that offers four packages to help developers configure and build microservice APIs and pubsub daemons. Gizmo came out of the company’s years-long move to adopting Go, which it primarily uses for writing JSON APIs. This is an exciting move for NYT in establishing itself as a technology company, and the toolkit is great for anyone looking to adopt Go and microservices.

For when all you’re asking for is a little accept (just a little bit) #Aretha:

You most likely use Content-Type in your POST and PUT calls, which tells the API what type of data you’re sending. But how often do you use the Accept header? That’s the other half of content negotiation, where you designate the type of response you’d like to receive from the API. This article from Restlet helps you understand the basics of content negotiation.
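As a quick illustration, here's a small Python sketch of both halves of content negotiation, using httpbin.org as a stand-in API (any HTTP service would do):

import requests

response = requests.post(
    "https://httpbin.org/post",
    json={"name": "Aretha"},                 # requests sets Content-Type: application/json
    headers={"Accept": "application/json"},  # ask for a JSON representation in return
)

# A well-behaved API honors Accept and reports the format in its Content-Type header.
print(response.headers["Content-Type"])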

For when you use continuous deployments—do or do not, there is no try:

There are a lot of things we know we should be doing, but the barrier to entry is great enough that we shy away—such is the case with deployment pipelines, a process that many are quick to tout the value of, but not as many have had the time to invest in building out internally. In The Case for Deployment Pipelines, Keith Casey, Director of Product at Clarify.io, explains the tools and processes his team used to build continuous delivery pipelines on microservices and AWS. With CD pipelines, Clarify.io now experiences improved workflow, increased confidence and reduced risk, and Casey explains these benefits in depth.

For when you think your docs are all that and then some:

So you’ve defined your API using Swagger or API Blueprint, spun up the docs generator and the API documentation is all done, right? Not quite, writes James Higginbotham. In this article on the LaunchAny blog, he suggests that API docs have audiences that include more than just developers, and that we need to go beyond offering only referential documentation.

For when there are some habits you just don’t want to break:

Monitoring can be a loaded term, and the practice extends from your servers, to your APIs, to third-party APIs, to apps. Librato recently published 7 Habits of Highly Successful Monitoring Infrastructures to help take your monitoring practices from creating problems to solving problems. This article covers all the bases: why you’re monitoring (data, feedback, single source of truth), what you’re using (micro vs. monolithic systems), when to monitor (throughout the entire development lifecycle) and more. In a pinch, you can view a snapshot of the post on the Librato blog.

For when you want to start the new year design-first:

If you couldn’t make it to API Days New Zealand this October, the video of the keynote presentation by Jason Harmon, Head of API Design at Paypal, is now available. Jason presents on Design-First APIs in Practice, beginning with a deep-dive into design principles that are commonly held outside of the software world, but have yet to be widely recognized within software. He then shows how you can apply these principles to your APIs to improve efficiency, discovery and adoption. 

For when you need to find the force within you:

While we’re always looking for the latest tools, it never hurts to take a step back and learn about organizational health and leadership. In a look back on 2015, Codeship has compiled the best of its many interviews with tech leaders over the year in a piece chock-full of advice that answers, What Are the Challenges of Leading in Tech? This post draws on experiences from leaders like Brendan Schwartz, CTO of Wistia; Peter Van Hardenberg, founding developer of Heroku Postgres; and our own Co-founder and CEO John Sheehan.   

Runscope Chats with API Thought Leaders at the API Strategy & Practice Conference in Austin

Neha Sampat of Built.io presents a keynote on day 2 of the conference, image from APIStrat.

Last month, we had the pleasure of participating in the API Strategy & Practice Conference (APIStrat), one of the leading API conferences for developers to come together and discuss API trends, tools, successes and challenges. APIStrat, organized by APIEvangelist and 3scale, draws an enthusiastic crowd and impressive lineup of speakers spanning API practitioners and consumers from a variety of industries. Runscope Co-founder and CEO John Sheehan spoke on Crafting a Great Webhooks Experience, and VP of Developer Relations Neil Mansilla gave a talk on The Journey to Effective API Evangelism. One of the highlights of the conference was the live recording of the final episode of the Traffic & Weather podcast featuring John Sheehan and Steve Marx, Engineering Manager at Dropbox. 

We sat down with some of the speakers at APIStrat to chat about how APIs are changing the business conversation. Hear from Jason Harmon, Head of API Design at Paypal; Kristen Womack, Director of Product at LeadPages; Kin Lane, The API Evangelist; John Sheehan and Steve Marx in the video below, and let us know what you're excited to see in the API world in 2016 in the comments!

Looking for more APIStrat? Dive into these conference recaps:

An Inside Look at 7 Standout APIs


Being an API service provider, we’re constantly on the lookout for how developers are building better APIs. As an API consumer—we rely on dozens of APIs internally and externally to power our business operations—we appreciate well designed APIs that are easily discoverable with a clean interface and quick ways to get started. This year, 7 APIs stood out to us that met all of these considerations.

We chatted with the people who have their hands and hearts in 7 leading APIs to get a peek behind the interface at the challenges they face and the solutions they use. These APIs have some of the best overall developer experience out there, which includes:

  • Immediate utility, like quick shipping integration tools from EasyPost or producing an audio or video transcript with Clarify.
  • Intuitive documentation, like the responsive navigation in multiple languages from Stripe and Clearbit.
  • Top-notch support, like custom integration guides from Chain and SendGrid’s practice of understanding multiple use cases.
  • Reliability, like Plaid publishing uptime stats.

Let's take a look inside these 7 standout APIs to see some of the ways they maintain an excellent developer experience:

1. Chain: Invest in Customer Onboarding

Chain provides modern database technology for the financial services industry with a platform that allows developers to design, deploy and operate blockchain networks. We asked Ryan Smith what elements go into creating a great developer experience:

“We believe that if financial services are built on top of excellent database infrastructure, the excellence will permeate all the way to the end user. To that end, we try our hardest to make the developer/customer experience as great as it could possibly be. We do this in a number of ways.

Instead of having generic API tutorials or guides, we tailor an integration guide specifically for our customer. This is basically a markdown file with a precise plan of what methods to call and when to call them. We also go to great extremes to provide good errors. Every error has a unique code (e.g. CH123) so that we can quickly debug issues when customers hit the error.

Additionally, we invest a lot in our SDKs. The SDKs will always include a request ID in the description of the exception. This way, when our customers get an exception, they just paste the details to us and we can quickly go to Splunk to trace the path of the request. We write canaries using our SDKs that exercise the complete set of functionality offered by our API. We then use Runscope to automatically run the canaries and alert us if a canary should fail."

—Ryan Smith, Co-founder and CTO at Chain

2. EasyPost: Maintain Simplicity

EasyPost, the Simple Shipping API, allows developers to integrate shipping and tracking info into their apps. We spoke with Sawyer Bateman about how EasyPost overcame a familiar challenge with its API:

“This year we've grappled frequently with a common challenge for APIs—maintaining simplicity despite the frequent addition of capabilities. Specifically, we've recently added a lot of capabilities to our address verification product and wanted to avoid naively introducing a handful of new endpoints that would increase the complexity of our API. Instead, we spoke with customers and analyzed our logs to identify a new workflow that better matches how this API is being utilized.

For instance, rather than having distinct endpoints for verifying U.S. addresses, global addresses, or whether an address is residential or not, you can now send EasyPost an enumerated list of verification types to perform when you create an address. This workflow fits our best practices perfectly, is extensible, and reduces the number of requests to our API that customers need to perform.”

—Sawyer Bateman, Lead Project Designer at EasyPost

3. Plaid: Know Your Audience

Plaid is disrupting the financial industry with an API that powers two products, Plaid Connect that collects transaction data and Plaid ACH Auth to help developers set up ACH payments. We asked William Hockey about the company’s API learnings from 2015:

“I think a big learning experience and insight is you can't make everyone happy. It’s a hard combination to make an API both easy to use and simple for someone just trying to hack together a side project—but also full featured and powerful enough for your power users. Also, the Silicon Valley developer standard isn't always the best choice. We've had to deploy in enterprise settings where their gateways won't allow standard HTTP verbs like PATCH and DELETE, so every route has to be aliased to a POST request.”

—William Hockey, Co-founder at Plaid

4. Clearbit: Manage Change for a Seamless Customer Experience

For Clearbit, which provides business intelligence, its business relies on a robust set of six customer APIs, plus multiple pre-built integrations like one with Google Sheets. We spoke to Harlow Ward to learn more about how his team manages its unique API versioning process for a seamless customer experience:

“When introducing breaking changes to an API, you want your customers to keep the version they already have and update when they are ready—so they can test the changes in their own systems before releasing any changes downstream to their users. We use point-in-time versioning, which is a little complex, but it has advantages. For instance, it allows us to make adjustments within our codebase without having to create new endpoints within our system. It also allows us to easily see which API versions our customers are using. Since we’re anticipating quite a few changes to our APIs in the future as we improve them, this approach works well for us.

The key to all this of course is managing change on the backend so that API versioning is seamless for our customers. Basically, we need assurance that our versioning doesn’t break and that we can maintain this style without a lot of complexity. By running a schedule of regression tests against our API endpoints, we’re able to support tons of API versions transparently. It’s easy for us to build and update tests and add custom headers and versioning so that there are no unknowns.”

—Harlow Ward, Developer and Co-founder at Clearbit

5. Stripe: Keep the Learning Curve in Mind when Introducing API Changes

Payment infrastructure provider Stripe has been focused on developers since day 1, and its API is constantly cited as an example of how to do developer experience right. We chatted with Ray Morgan about how Stripe updated its API this year:

“One thing that I am super excited that Stripe has been able to do over this last year has been to start introducing new payment types into our API. This has been particularly tricky since we don't want to force our users to upgrade their API version in order to get a new feature, and at the same time, we don't want to just bolt on new features.

For example, when fetching a list of Stripe charges from our API, your code may be expecting them all to contain a 'card' property that looks like a card. By making old API versions explicitly declare that they want non-card charges in that list (via a GET param), we are able to provide a solution that doesn't break current code. Additionally, this means that upgrading is as simple as first adding that flag to your call sites and testing, then once that change is live, you can upgrade your API version which will basically be a no-op (since you are already getting all the objects). Later, you can choose to remove the (now) extra query param.

Overall, the changes to the API ended up being fairly minimal and completely non-breaking. This is awesome because it means a much lower learning curve for the new features. They also just feel much more a part of the Stripe API.”

—Ray Morgan, Software Engineer at Stripe

6. SendGrid: Anticipate Different API Use Cases

At the core of SendGrid, the world’s largest email infrastructure provider, is an internal and customer-facing API with hundreds of endpoints. We spoke with Matt Bernier about how SendGrid provides support for its API from all angles:

“In my role, I have my hands in all the documentation, development of API libraries and API testing, which means I have three different entry points into our API. This allows me to see the larger picture from the top down and make sure that the API is actually consistent in how it’s presented. It also lets me think about what people are actually going to do with our API.

Working on testing means I can test the inputs and outputs, but also think about how people are using the API differently than what we planned for or built a particular feature for. Using the API in different ways lets me make sure the API works for different customers with different use cases.”

—Matt Bernier, Developer Experience Product Manager at SendGrid

7. Clarify: Approach Hypermedia Wisely

Clarify takes a new approach to transcription with an API for processing audio and video files, the results of which are then easily searchable and can produce reports. Hypermedia is a key approach at Clarify, and we chatted with Keith Casey to learn more:

“As we were expanding the API this year, we went deeper on our hypermedia approach. We chose to go all in on hypermedia because we don’t know what’s next. Literally.

Our systems are machine learning-based and we analyze and learn things about our customers and their data constantly. Some are specifically things we set out to learn and others are accidental, but valuable, discoveries that help us detect and solve their problems better. For example, in analyzing sales calls to determine which calls resulted in a sale, we’ve discovered a number of patterns that go along with successful and failed sales attempts which then informs our customers’ sales training.

It’s important to remember that part of the underlying goal of REST is to have shared language to promote understanding. Therefore, the most important aspect of hypermedia is making sure your language is as consistent and explicit as possible. Don’t make up terms because they seem cute or applicable within your team. If there’s an industry-common word, use that instead.”

—Keith Casey, Director of Product at Clarify

These tips cover several different aspects of developer experience and maintaining high quality APIs. One of our best tips is to use API monitoring to ensure that if and when anything does go wrong—an endpoint goes down, there's latency or the API is returning the wrong data—you're the first to know and can fix the problem before customers notice. You can sign up for Runscope for free and get started in minutes monitoring the APIs you and your customers depend on most. 

Are there any APIs not on this list that stand out to you? What are some of your best tips for maintaining healthy APIs and a solid developer experience? Let us know in the comments or shoot us an email, we'd love to hear from you! 

View Your Most Important API Performance Metrics in One Place with the API Test Dashboard


When testing and monitoring APIs, there’s a lot of information to consider outside of just “is my API up or down at this moment.” That’s why we’ve updated the view of your API tests to provide a comprehensive picture of API health. We’re excited to introduce the API Test Dashboard—an at-a-glance view of key API performance metrics for all of your API tests, in one place.

The API Test Dashboard provides a top-down view of key API performance metrics and lets you interact with your tests at the ground level to quickly catch and debug both apparent and intermittent API problems. The dashboard provides the same metrics as your daily API Performance Report, including the success rate and average response time over a period of time (1 hour, 1 day or 30 days).

When you sign in to Runscope, navigate to Tests in the top navigation and you’ll see your API Test Dashboard right away. Every test has its own test summary card, which can be sorted to suit your preferences, either by date, last run or failures first. Each dashboard is unique to the user, so everyone on a team can have their own individual test view.

From each test summary card, you can:

  • See the success rate and average response time of the API test over time
  • Run the test from multiple global locations and environments
  • Edit, schedule and duplicate the test
  • View when and from where your last 40 test runs passed or failed 

The Runscope API Test Dashboard gives you the tools to proactively monitor trends around your API test results and service performance so you can stay ahead of even intermittent problems that may cause serious issues for your end-users. Try out the dashboard today by signing up for Runscope for free—during the trial, you’ll get the total package of API monitoring, testing and debugging so you can prevent, identify and solve API problems fast.


Quickstart Guide to the Runscope API


This is part one of a series of blog posts about how to use the Runscope API. In this article you’ll learn how to get started using the API in less than 5 minutes, from creating your first access token to fetching the tests in your bucket.

The Runscope API provides access to the data within your Runscope account. This includes access to read, create, modify and delete nearly every type of resource — from tests, environments, schedules, test results and request logs to buckets. The API makes Runscope incredibly pliable, helping Runscope fit into how your team develops, tests and deploys software.

We know that getting started with new APIs can sometimes be daunting. But rest assured, if you're even lightly familiar with Runscope, you'll be making API calls in five minutes or less -- all from within Runscope, no tools or coding required.

Step 1: Create an app and access token

Our API uses the OAuth 2.0 protocol for authentication. If you've ever used APIs from GitHub, Twitter, Facebook or Google, then you're already familiar with this first step -- you need to create an Application. When an Application is created, a personal access token that's authorized to access data in your account is conveniently generated. Using this token, you can begin making API calls immediately without having to step through the account authorization flow.

 

Step 2: Start with your bucket list

All of your tests, environments and logged API traffic are organized into Buckets. Every API method related to managing API tests and logged traffic requires a bucket key. Therefore, the first API call to try is the Bucket List method. When you make a GET call to the /buckets endpoint, a list of buckets is returned in the data array.

In order to make an authenticated request to the Runscope API, the access token is passed in the Authorization header. The auth header is pre-populated when you click the Try it in the Request Editor button.

Note: bucket keys are unique resource IDs. Bucket key values do not change after a bucket is created and are safe to cache. Bucket names, however, are not guaranteed to be unique (you can have two buckets with the same name).
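If you'd like to make the same call outside of the Request Editor, a few lines of Python will do it. This is a minimal sketch that assumes your personal access token lives in a RUNSCOPE_TOKEN environment variable and that the response fields match the shape described above (each bucket has a name and a key):

import os
import requests

token = os.environ["RUNSCOPE_TOKEN"]  # personal access token from your Application
headers = {"Authorization": "Bearer " + token}

response = requests.get("https://api.runscope.com/buckets", headers=headers)
response.raise_for_status()

# Buckets come back in the "data" array; each entry includes a name and a bucket key.
for bucket in response.json()["data"]:
    print(bucket["name"], bucket["key"])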

Step 3: Fetch your tests

Next, try the Test List method. This method lists all of the tests that reside in the specified bucket. Similar to the Bucket List method, a list of tests is returned in the data array. In the screencast below, we’re listing out all of our tests, and then making another call to fetch a specific test using the Test Detail method.

The amount of information returned may at first seem a bit overwhelming; however, if you look closely, you’ll find that it’s rather easy to understand with object key names that intuitively map back to the Runscope UI.
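Continuing the sketch from Step 2, here's a hedged example of the Test List and Test Detail calls (the bucket key is a placeholder, and the id and name fields are assumptions based on the documentation):

import os
import requests

token = os.environ["RUNSCOPE_TOKEN"]
headers = {"Authorization": "Bearer " + token}
bucket_key = "your-bucket-key"  # placeholder: use a key from the Bucket List response

# Test List: every test in the bucket, returned in the "data" array.
tests = requests.get(
    "https://api.runscope.com/buckets/{}/tests".format(bucket_key),
    headers=headers,
).json()["data"]
for test in tests:
    print(test["id"], test["name"])

# Test Detail: the full definition (steps, environments, schedules) of one test.
detail = requests.get(
    "https://api.runscope.com/buckets/{}/tests/{}".format(bucket_key, tests[0]["id"]),
    headers=headers,
).json()["data"]
print(detail["name"])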

Exploring other methods and resources

Use the List Buckets and List Tests methods above to familiarize yourself with the JSON representation of the Runscope data that you regularly manage in the dashboard UI. Now that you know how easy it is to get started, we encourage you to try out other methods and resources. While you're still learning how to use the API and trying out new methods, we suggest that you create a new bucket that acts as a sandbox, or stick with read-only (GET) methods if you're making calls against production-level buckets/data.

In the follow up articles in this series, we'll dig deeper into the anatomy of the Test Resource (steps, assertions, environments, etc.) as well as how to create new and modify existing tests. Soon enough, you'll feel as comfortable working with Runscope through the API as you do through the UI. Meanwhile, if you need any help navigating the API, check out the API Documentation or contact our Support team.

Original photo by Oscar Rethwill under CC License.

Migrating to DynamoDB, Part 1: Lessons in Schema Design


This post is the first in a two-part series about migrating to DynamoDB by Runscope Engineer Garrett Heel (see Part 2). You can also catch Principal Infrastructure Engineer Ryan Park at the AWS Pop-up Loft on January 26 to learn more about our migration. Note: This event has passed.

At Runscope, we have a small but mighty DevOps team of three, so we’re constantly looking at better ways to manage and support our ever growing infrastructure requirements. We rely on several AWS products to achieve this and we recently finished a large migration over to DynamoDB. During this process we made a few missteps and learnt a bunch of useful lessons that we hope will help you and others in a similar position.

Outgrowing Our Old Database

Our customers use Runscope to run a wide variety of API tests: on local dev environments, private APIs, public APIs and third-party APIs from all over the world. Every time an API test is run, we store the results of those tests in a database. Customers can then review the logs and debug API problems or share results with other team members or stakeholders.

When we first launched API tests at Runscope two years ago, we stored the results of these tests in a PostgreSQL database that we managed on EC2. It didn’t take long for scaling issues to arise as usage grew heavily, with many tests being run on a by-the-minute schedule generating millions of test runs. We considered a few alternatives, such as HBase, but ended up choosing DynamoDB since it was a good fit for the workload and we’d already had some operational experience with it.

Migrating Data

The initial migration to DynamoDB involved a few tables, but we’ll focus on one in particular which holds test results. Take, for instance, a “Login & Checkout” test which makes a few HTTP calls and verifies the response content and status code of each. Every time a run of this test is triggered, we store data about the overall result - the status, timestamp, pass/fail, etc.

Example Test Results table in DynamoDB.

For this table, test_id and result_id were chosen as the partition key and range key respectively. From the DynamoDB documentation:

To achieve the full amount of request throughput you have provisioned for a table, keep your workload spread evenly across the partition key values.

We realized that our partition key wasn’t perfect for maximizing throughput but it gave us some indexing for free. We also had a somewhat idealistic view of DynamoDB being some magical technology that could “scale infinitely”. Besides, we weren’t having any issues initially, so no big deal right?

Challenges with Moving to DynamoDB

Over time, a few not-so-unusual things compounded to cause us grief.

1. Product changes

First, some quick background: a Runscope API test can be scheduled to run up to once per minute and we do a small fixed number of writes for each. Additionally, these can be configured to run from up to 12 locations simultaneously. So the number of writes each run, within a small timeframe, is: 

<number_of_locations> × <fixed>

Shortly after our migration to DynamoDB, we released a new feature named Test Environments. This made it much easier to run a test with different/reusable sets of configuration (i.e local/test/production). This had a great response in that customers were condensing their tests and running more now that they were easier to configure.

Unfortunately, this also had the impact of further amplifying the writes going to a single partition key, since there were fewer tests (on average) each being run more often. Our equation grew to:

<number_of_locations> × <number_of_environments> × <fixed>

2. Partitions

Today we have about 400GB of data in this table (excluding indexes), which continues to grow rapidly. We’re also up over 400% on test runs since the original migration. Due to the table size alone, we estimate having grown from around 16 to 64 partitions (note that determining this is not an exact science).

So let’s recap:

  • Each write for a test run is guaranteed to go to the same partition, due to our partition key
  • The number of partitions has increased significantly
  • Some tests are run far more frequently than others

Discovering a Solution

Once we examined the throttled requests by sending them to Runscope with a Boto plugin we built, the issue became clear. We were writing to some partitions far more frequently than others due to our schema design, causing a severely imbalanced distribution of writes. This is commonly referred to as the "hot partition" problem and resulted in us getting throttled. A lot.

What partitions in DynamoDB look like after imbalanced writes.

Effects of the "hot partition" problem in DynamoDB.

One might say, “That’s easily fixed, just increase the write throughput!” The fact that we can do this quickly is one of the big upshots of using DynamoDB, and it’s something that we did use liberally to get us out of a jam.

The thing to keep in mind here is that any additional throughput is evenly distributed amongst every partition. We were steadily doing 300 writes/second but needed to provision for 2,000 in order to give a few hot partitions just 25 extra writes/second - and we still saw throttling. This is not a long term solution and quickly becomes very expensive.

A DynamoDB table with 100 read & write capacity and 4 partitions. As the number of partitions grows, the throughput each receives is diluted.
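To make the dilution concrete, here's a rough back-of-the-envelope sketch in Python (the partition count is the estimate from above, so treat the numbers as approximate):

# Rough numbers from this post: roughly 64 partitions after the table grew.
partitions = 64

# DynamoDB splits provisioned throughput evenly across partitions, so each
# partition only ever gets its share, no matter where the writes actually go.
for provisioned_writes in (300, 2000):
    per_partition = provisioned_writes / partitions
    print("{} writes/sec provisioned -> ~{:.0f} writes/sec per partition".format(
        provisioned_writes, per_partition))

# 2000 / 64 is roughly 31 writes/sec per partition, only ~26 more than the ~5/sec
# that 300 writes/sec provided -- roughly the "25 extra writes/second" noted above.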

It didn’t take us long to figure out that using the result_id as the partition key was the correct long-term solution. This would afford us truly distributed writes to the table at the expense of a little extra index work.

Balanced writes — a solution to the hot partition problem.
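As an illustration only (this is not Runscope's actual table definition, and the table name and capacity values below are made up), here's what keying a results table on result_id looks like with boto3, so that writes hash evenly across partitions instead of piling up behind a single test_id:

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Illustrative only: keying on result_id spreads writes evenly across partitions,
# whereas keying on test_id sends every run of a popular test to the same partition.
table = dynamodb.create_table(
    TableName="test_results_example",
    KeySchema=[{"AttributeName": "result_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "result_id", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)
table.wait_until_exists()
print(table.key_schema)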

Learn More In Person & On the Blog

If you’re interested in learning more about our migration to DynamoDB and hearing from our DevOps team in person, Runscope Principal Infrastructure Engineer Ryan Park will be at the AWS Pop-up Loft in San Francisco on Tuesday, January 26 presenting Behind the Scenes with Runscope—Moving to DynamoDB: Practical Lessons Learned. Make sure to reserve your spot! [Note: This event has passed.]

In Part 2 of our journey migrating to DynamoDB, we’ll talk about how we actually changed the partition key (hint: it involves another migration) and our experiences with, and the limitations of, Global Secondary Indexes. If you have any questions about what you've read so far, feel free to ask in the comments section below and we're happy to answer them.


If you're new to API testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account.


3 Ways Subtests Can Improve Your API Monitoring and Testing


Have you ever wished you could reuse a set of tests and avoid that pesky duplication? Well, with the addition of Subtest Steps, now you can and it's as easy as pie!

Subtest Steps

For starters, let's take a look at the most common use cases for using subtests:

  • Reuse Common Functionality

Need to generate a refresh token on each test run? Now you can create a test to generate a token and use that test as a step in other tests. When the token generation process changes, you only need to update it in one place.

  • Setup/Teardown Steps

If your tests need to set up data before running, or clean up after themselves, subtests can group the required API calls for setup and teardown into a single, reusable step.

  • Organize Your Tests into Suites

Running a set of related tests as a group is now possible. Create suites of related tests and run them all together as a unit, with notifications for the result of the whole suite.

You can find the Subtest Step option in your Test Editor under "+Add Step":

 

 

And this is what it will look like:

 

 

You can specify the test's Bucket, Name, and Environment to be used as a subtest, and you can position it anywhere in your Steps workflow.

Subtests are a powerful tool to help you create a more robust set of tests, while also simplifying your workflow. To make things even better, you can also pass dynamic data from other steps to subtests, and extract data from the result to use in subsequent steps.

In the coming weeks, we'll be showing a few use cases for how you can use subtests to improve your API testing workflow. If there's anything specific you would like to see, please let us know by reaching out to us on Twitter @Runscope, or email us at help@runscope.com.


If you're new to API testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account.


Global API Monitoring: Analyze Your APIs Response Time from 16 Distributed Locations


When your API is a core part of your business, one of the biggest challenges is ensuring that all your customers have a fast response time. Nowadays, even small businesses might have a distributed, global customer base, and you don't want them suffering from a slow and unresponsive experience.

It's crucial that your API is just as fast from across the globe as it is from your office.

The year is almost over, but we're still working hard here at Runscope. We've just added two new locations that you can use to test and monitor your API: Canada (Central) and London.

Global API Monitoring

We now have you covered with 16 global locations that you can use to test and monitor your APIs. The full list includes:

  • US Virginia
  • US California
  • US Illinois
  • US Texas
  • US Oregon
  • Ireland
  • Germany (Frankfurt)
  • Australia (Sydney)
  • Singapore
  • Hong Kong
  • Japan (Tokyo)
  • Brazil (São Paulo)
  • India (Mumbai)
  • US Ohio
  • Canada (Central)
  • London
     

How to Add Locations to Your Tests

Activating new locations for your tests is really easy. Just open your API tests Editor, click on Environment Settings, select Locations on the left-hand side, and toggle the locations you wish to use:

 

 

Your API Performance Report email will then show response times and success rates compared to the previous day for every location. This will help you stay ahead of any latency issues that are too small to cause a test failure, and pinpoint any problems even before your customers do.

We also want to wish everyone happy holidays, and we look forward to another 200 OK year!


To get a complete picture of your API’s performance, from almost every corner of the world, sign up for your free Runscope trial account today and begin monitoring globally—it’s just one click away.



New Year's Resolution: How (and Why) to Give Back to the Developer Community


A new year has arrived, and that usually means new resolutions for most of us. One resolution that has been on my mind for 2017 is a simple question: how can I give back to the developer community this year?

Having worked as a developer, developer evangelist, and content writer in the past few years, I have gotten a lot of help from a lot of different sources: discussions with friends and work colleagues; talks and workshops at meetups and conferences; online communities like Stack Overflow, Hacker News, Slack groups, Facebook groups, etc.

But, consuming that knowledge is often easier than contributing to it. For example, it probably took me a good couple years before I started interacting and upvoting on the Stack Overflow community. It took even longer for me to be comfortable writing and sharing my first blog post.

So, if you made a similar resolution to give back to the developer community this year and need some help on how to get started, we're going to cover a few different ways that you can do that. First, we're going to talk about online communities: GitHub, Stack Overflow, social media, and writing. Then we'll move to offline communities and talk about meetups, conferences, and public speaking.

Every section will include some general advice, and also some more in-depth links. But before we get to the actionable advice, let's talk a little bit about why we should give back.

Why Give Back?

"Wikimedia Hackathon 2013, Amsterdam" by Sebastiaan ter Burg is licensed under CC BY 2.0 / Cropped from original

"Wikimedia Hackathon 2013, Amsterdam" by Sebastiaan ter Burg is licensed under CC BY 2.0 / Cropped from original

There are plenty of ways to go about giving back, but before we get to that, maybe you're asking yourself: why should I give back? Well, helping others can make us happy (and some might say that's the secret of happiness). There's a Chinese saying that goes: “If you want happiness for an hour, take a nap. If you want happiness for a day, go fishing. If you want happiness for a year, inherit a fortune. If you want happiness for a lifetime, help somebody.” If you're more skeptical, there are actual scientific studies to back up how helping others positively impacts your life.

Besides happiness and meaningfulness, giving a helping hand can also boomerang back into your life in a variety of positive ways. Helping open-source projects, for example, could lead you to meet other developers who might be your future co-workers. Or speaking at a conference might help you cross an item off your bucket list, and help you give better presentations in the future.

So, let's get started!

Online Communities

Stack Overflow

I can't count how many times Stack Overflow has helped me solve some horrible bug, or figure out how to center a div with CSS. And it's really easy to give back here:

  • Nothing like a website's own guide, so definitely check out Stack Overflow's Help Center. That includes a good collection of guidelines and suggestions, from what topics you can ask about, to how their reputation system works.
  • Be nice.
  • If you find a question and answer that helps you solve your problem, make sure to upvote both the answer AND the question. That can help other people that run into the same issue more easily find it. 
  • It's also common for answers to get outdated. If you see that, don't be afraid to suggest an edit to an answer, or write your own. Maybe a new version of a Node.js library came out that changed a method's signature, or maybe you just found a better way to center a div with CSS. 
  • It's okay to answer your own questions! If you couldn't find a solution on the website and want to document it for other people, that's encouraged. If you have a little bit of reputation, you can even post a question with an answer to it at the same time.

Slack Groups

The popular work chat app is not only good for use inside a company. A lot of developer communities have embraced the platform and created public Slack groups that you can join, in place of the usual Google Groups. For example, you can find communities ranging from ones focused on a single language, like Clojure, to broader ones like DevChat.

There's no official Slack communities directory, but here are two that I found that include some popular developer communities:

You can also create your own Slack group. Besides groups for specific languages, another option is to create a group for your local community, city, or even country. Buffer has a great blog post about starting your own group that covers why you should do it, how to promote it, moderate, and even measure success with KPIs if you're into it.

Twitter

One of the best ways to keep up-to-date with news is Twitter. But building a good follower list can be a challenge. That's why Twitter lists can give you a big hand when you're trying to follow a specific topic. Lists are curated groups of accounts that you (or others) can create, and they make it really simple to follow a specific topic, from a programming language to the people attending a conference.

There's no way to search for lists right from Twitter, but you can do a Google Search filtering by site, and it should give you back a few good results.

Search query: site:twitter.com js developers list

Some conferences also make Twitter lists for speakers or attendants, and that's usually a good way to start interacting online and meeting people even before the event starts.

Writing

Chris Wolfgang from Codeship wrote a blog post last year on "Why Developers Should Write", and I highly recommend it. I'm biased here, but I really believe writing is one of the best ways to give back to the community.

This doesn't mean you have to start your own blog and write a 3000-word blog post every single day. There are better (and saner) ways to help your fellow developers with writing:

  • As I mentioned before, writing question and answers on Stack Overflow is awesome.
  • If you don't want to start your own blog, another option you have is writing guest posts for other companies. Some will even pay you for it, but before you start writing make sure to reach out to these companies first. They might be only interested in certain topics, and sometimes they might even help you brainstorm some ideas for you to write about.
  • If you'd like to start your own blog, that's awesome! There are a few ways you can go about, like hosting your own with a platform like Wordpress or Ghost. My favorite though is writing on Medium. It's a pleasure to write (and read) on it, it's super easy to get started, and it gives you some help in reaching readers with the Facebook/Twitter connections, and their newsletters.
  • Besides having your personal blog on Medium, you can also contribute to publications there. A popular one for JavaScript is JavaScript Scene, and if you write something interesting you can submit your post to be published in it.

Another great post/talk on this is Kristina Thai's "Become a Better Engineer Through Writing". She gives a great overview of the benefits and challenges of different forms of writing you can do (journal vs. Q&A forums vs. blogging vs. tutorials). She also shares her journey on starting her own blog, from getting a couple of visits a day to being invited to speak at conferences all over the world. And the best advice I think she gives for getting started: you do not have to be an expert. So, don't be afraid of writing even a really tiny blog post about an issue you just solved, or a new open-source project you started.

Facebook Groups & Quora

I'll just mention these two briefly: Facebook Groups are still a good way to participate in developer communities, and Quora is an often-underrated source of knowledge for developer questions, with plenty of good content in there as well.

GitHub

As a developer, this might be the place where you spend most of your time next to your IDE. You can find the most popular open-source projects hosted on GitHub, and it may also be where your company keeps its private repositories.

  • If you're new to open-source, check out GitHub's guide on Contributing to Open-source, and Erika Heidi's post on A Beginner's Guide to Open Source: The Best Advice for Making your First Contribution.
  • Nowadays, it is really common for even the smallest project to use some sort of open-source framework or library. These are maintained by people, most of them in their free time. So if you find a bug, go to the project's repository and see if there's an open issue for it already. If not, open one and be as descriptive as possible. Maybe even take a shot at fixing it yourself. Reading other people's code is a great way to level up as a developer too.
  • You can also find events that encourage people to start contributing to open-source like Digital Ocean's Hacktoberfest, which happens every year in October. If you scroll down to the bottom and look at the "Resources" section, they also link to some excellent guides about how to contribute.
  • A common pattern among projects that need help is tagging certain issues with "help wanted". That makes it super easy to search GitHub for issues you can help with. You can do a GitHub search for state:open label:"help wanted", filter by language, and find projects you can contribute to.
  • Open-sourcing your projects is another great way to help the community. But, as they say, with great power comes great responsibility. Sharing your code for other people to use is great, but remember to also include a license with it. That way, people know whether they are free to use it, change it, or distribute it. GitHub even made this easier by creating a website appropriately called choosealicense.com.

Offline Communities

Meetups

This is one of the best ways to give back, and it also reaps some huge rewards. As humans, social interaction is proven to have all sorts of positive effects on our lives. Meetups can help you meet some great people, and also pick up some new skills.

Meetup.com is the biggest platform for organizing and finding these events. And, especially if you live in a big city, it's easy to find developer meetups by creating an account there and searching for your favorite programming languages.

If you're already part of a few meetups, know that most organizers can always use a helping hand. Putting events together takes a tremendous amount of work, so every little gesture is appreciated: inviting your friends, actually attending after you RSVP, interacting with other people, and taking pictures and posting them on social media.

And if you want to do even more, start your own meetup! Maybe your city doesn't have a JavaScript meetup yet, or you want to meet more people that are working with Docker. As I mentioned before, putting together an event is a tremendous amount of work, but there are a few resources that can help you.

Buffer has another great post, this one titled "How to Host a Meetup For Your Community: The Who, What, Where, When, And How". I especially like the checklist they have at the end of it.

If you want to start a meetup but would like a head start with a little more support and resources, a great option is to look for organizations such as CoderDojo.

CoderDojo is a "worldwide movement of free, volunteer-led, community-based programming clubs for young people." Greg Bulmash is the Lead Organizer for Seattle's CoderDojo, and he has been doing it even before he became a technical evangelist at Amazon. He says "Besides helping prime the pump for the next generation of developers, it helps a lot of public spirited grown-up devs get a chance to build our mentoring skills, connect with peers (fellow volunteers), and get the good karma and happy brain chemistry that comes from seeing a kid light up because they grasped a new concept."

There are also some great organizations out there who are making a difference by empowering women, and making the developer community a more diverse field. For example, RailsGirls, a non-profit volunteer community, aims to "give tools and a community for women to understand technology and to build their ideas".

Helping organize an event that has the backing of an organization is usually a little bit easier. They already have experience and resources that you can use, and a community of experienced people to support you.

Here are a few other organizations that you might be able to help as an organizer or volunteer:

Conferences

Similar to meetups, conferences have many of the same benefits but on a bigger scale. You can meet even more people, listen to and learn from some great speakers, and maybe even attend some workshops to polish your skills.

And also similar to meetups, giving a helping hand during conferences can also come in a multitude of ways:

  • Small things like taking pictures during the conference and sharing them on social media are always helpful. Triple bonus if it's a picture of a speaker that includes a thank you for a talk that you really liked.
  • A lot of conferences need volunteers, from helping with the registration process to assisting speakers in setting up their talks. So if you have some free time, don't be afraid to reach out to organizers and offer to help. That may also save you some money, and if you're an introvert, it's a great way to meet new people.
  • Speaking of which, Adam Duvander from Zapier wrote a great blog post on "How to Attend Conferences as an Introvert", so definitely check that out.

Brown Bag Lunch

A cool practice at some companies is the "brown bag lunch". It usually involves an office lunch and a talk by someone from the company or an external speaker.

Brown bag lunches are a great way to spread knowledge across the company, especially when you have multiple teams working separately. They're also a great space to talk about new technologies or personal projects, and even to practice a talk that you might be preparing for a meetup or conference.

Giving Talks

Giving talks could be its own separate blog post, so I just wanted to share some resources and general tips if you're thinking about doing it:

  • It might be tempting to just go and try to give a talk at a conference, but speaking to a huge crowd can be really intimidating. You might want to work your way up the chain by first practicing a talk with friends or colleagues (internal talks at companies are great). You can then start giving talks at meetups, which usually need more speakers every month. And then, finally, start presenting at conferences.
  • Don't be afraid to reuse talks, especially if they're well received. Practice makes perfect, and the same goes for presentations.
  • Having great slides can make or break a presentation, so do give some thought to them. And by all means, avoid walls of text. Chris Heilmann gives some great advice here on "Prepare great slide decks for presentations".
  • TED talks are really popular, and they even have a book about public speaking! It's called The Official TED Guide to Public Speaking by Chris Anderson, and it can truly change the way you approach giving talks.
  • Bonus: if you have some extra time, check out Tim Urban's post (Wait But Why) on his experience doing a TED talk.

Do What Works Best for You

Giving back and helping others is great, and these are just a few ways to do it. It can help you achieve that New Year's resolution, give you a sense of meaningfulness, and even include a few extra benefits like making new friends, landing a new job, or learning a new skill.

If you would like to talk more about any of the topics mentioned here, please reach out to me on Twitter, and I'll be happy to help. :)


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account.


Retry on Failure + Threshold Alerts: Optimize Your Team’s Monitoring Notifications

"Cabled" by Alan Levine is licensed under CC BY 2.0 / Cropped from original

Have you ever been notified of an API monitoring error late at night, only to find out that it was a temporary network error? Or do you have false positives in your API monitoring data that are affecting your customer reports?

Timeout errors can affect any API, whether because of a temporarily bad internet connection or a complex database operation. And a lot of the time, a simple re-run will return a 200 response. That's why we recently added a new feature called Retry on Failure.

How to Use Retry on Failure

Retry on Failure automatically re-runs a test once, immediately after it fails. Failed re-runs are not retried again.

You can find this option in your test environment settings, under the Behaviors menu. It's off by default, and it's available for both test-specific and shared environments.

ss-retry-on-failure.jpg

If you know that the API you're monitoring suffers from intermittent issues that aren't part of a bigger problem, definitely give this new feature a try. As mentioned before, it can give you a clearer picture of your API success/failure data and get rid of false positives.

Another use case is when you're running your tests on a longer schedule, such as every 15 minutes or every hour. With retries active, you get more accurate data without having to wait for the next scheduled run.

As a best practice, retry on failure works especially well if combined with threshold notifications. It will help you get to thresholds more quickly, so make sure to review your email and 3rd-party notification settings after activating it.

ss-threshold-notifications.jpg

Questions and Feedback

Retry on failure can reduce the number of notifications sent to your team because of temporary network errors, and give you more accurate data about your APIs.

If you need any help or run into any issues, please check out our support page or reach out to our awesome support team at help@runscope.com.


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account.



Creating a Setup Phase with Subtests: How to Handle OAuth 2 Access Tokens using the Runscope API


In a recent post, we announced a new feature called Subtest Step. Today, we're going to be looking at one of the most common use cases for it: creating a "Setup" phase to always refresh your access token. With this, you can easily monitor and test any API that implements OAuth 2, without having to worry if your token is valid or not.

A few things before we start:

  • This tutorial is going to focus on how to handle access and refresh tokens inside of Runscope. You should be familiar with OAuth 2, as we won't be going too deep into it.
  • This should work for any API that implements OAuth 2.
  • If you need to quickly go through an Authorization Code Grant Flow to get your first access/refresh tokens, check out our OAuth Token Generator tool.
  • To use the Runscope API, we’re going to need a Runscope access token. The API uses OAuth 2.0, so you'll need to create an application from your profile Applications page. That will provide you with a Personal Access Token which can be used as an OAuth bearer token to access data in your account. If you need help creating an application, please check our blog post "Quickstart Guide to the Runscope API."
  • Please be careful when making changes to a live environment. Make sure to thoroughly test the steps below in a separate environment or bucket before committing them to your production tests. 

OAuth 2 Access and Refresh Tokens

So, let's say you went through an API's Authorization Code Grant Flow, and got your access and refresh tokens. You're ready to start creating tests for your API and monitoring it. But, before each test, you want to create a setup phase to get a new access token. How can we do that?

Well, three easy steps:

  1. Create a Shared Environment with your tokens
  2. Create a new test that will:
    • Get new tokens
    • Update our Shared Environment using the Runscope API
  3. Add our test from the previous step as a Subtest in any other test

Let's dive right in!

Setting up a Shared Environment with an Access Token

The first thing we should do is create a Shared Environment for our tests and add two variables for our tokens so that they are easily accessible:

Create shared environment and set up variables

Also make sure to copy our newly created shared environment UUID, which we will use in the next steps:

2-shared-env-uuid.png

Creating a New Test and Getting a New Access Token

Now, we want to make sure we always get a new Access Token before each test as our setup phase. Let's create a new test to do that.

The following steps are just an example placeholder. Every API is different, so this is where you will update your request with the necessary information for your API: the request endpoint that returns new access and refresh tokens, any authorization headers, and any variables you might need.

For this tutorial, the only thing we need to make sure of in this step is that we extract our new access and refresh tokens from the API response. We're storing them in the variables newAccessToken and newRefreshToken.

3-create-new-test-set.gif
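
If you'd rather handle that extraction in a post-response script than through the editor UI, here's a minimal sketch. It assumes the token endpoint returns a JSON body with access_token and refresh_token properties, so rename those to whatever your API actually returns:

// Post-response script sketch (assumption: the token endpoint returns JSON
// with access_token and refresh_token properties -- rename to match your API).
var tokenResponse = JSON.parse(response.body);
variables.set("newAccessToken", tokenResponse.access_token);
variables.set("newRefreshToken", tokenResponse.refresh_token);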

Updating Your Shared Environment's Variables with the Runscope API

Next, we are going to add two request steps to our test. Both requests are going to hit the same endpoint with two different HTTP methods:

https://api.runscope.com/buckets/<bucket_key>/environments/<environment_id>

The first thing we need to do is GET our shared environment settings and extract the JSON result.

You can see the endpoint includes <bucket_key> and <environment_id> variables. We can replace the first with one of the built-in variables in the Test Editor: {{runscope_bucket}}. That will automatically be replaced with your test's bucket key.

The second one, <environment_id>, can be replaced by the UUID from the shared environment you created at the beginning of the tutorial. So you can directly paste that at the end of the URL, or store it in an initial variable and use that instead.

We also need to use our Runscope API personal access token to access this endpoint. So, if you're familiar with OAuth 2, all you need to do is add an Authorization header to our request with the format "Bearer your_runscope_api_key_here".

Getting the Shared Environment Settings

4-get-shared-env.gif

Remember to create a variable to extract and store the JSON result. That will include all the current settings for our shared environment.

Updating and Saving the Shared Environment Settings

For this next step, we're going to start by duplicating our last request. Then, all we need to do is change our method to PUT instead of GET and add a small pre-request script to our test:

var envSettings = JSON.parse(variables.get("envSettings"));
envSettings.initial_variables.accessToken = variables.get("newAccessToken");
envSettings.initial_variables.refreshToken = variables.get("newRefreshToken");
request.body = JSON.stringify(envSettings);

The script will do a few things:

  • Retrieve the variable envSettings from the last step, which includes the JSON result of our shared environment settings.
  • Update our shared environment variables to include our new tokens.
  • Set our new shared environment settings to the request body, so it gets saved when we make our PUT request.
5-update-shared-env.gif

Using the "Get Refresh Token" as a Subtest

Now to the fun part! We're ready to make authenticated requests to our API, and we can create as many API tests as we want and just reuse our Authentication Test as a Subtest. :)

6-use-subtest.gif

Bonus

Sam, our awesome Customer Success Engineer here at Runscope, created a template for the steps above. So, if you know what you're doing and just want a head start, download this JSON and import it as "Runscope API Tests." To give it a try, all you need to do is create a shared environment and add two initial variables to the imported test:

  • runscope_token - Your Runscope API token.
  • shared_environment - Your shared environment UUID.

Conclusion

This is just one of the use cases for Subtest Steps, and we hope you find it useful! In some cases, you might want to check if your access token is expired before getting a new one, or only get one after a set period of time. If that's something you'd like to see us write about, or need help setting up, please let us know at help@runscope.com.

In our next post about Subtests, we'll cover how you can use them to better organize your tests.


Need to monitor and test an API that implements OAuth 2? Learn more about Runscope's cloud-based API testing solution and sign up for your free trial account.




Working with the Azure Storage REST API in automated testing

Authenticom's Logo

This is a guest post by Chris Kirby, Director of Technology at Authenticom. He's a big fan of automating processes, beer, and games. You can follow him on Twitter, GitHub, and his blog.

If you're interested in being published in our blog, reach out to us at help@runscope.com.


Testing! Every developer's favorite topic :). For me, if I can save time through automation, then I'm interested. Automated testing for a developer typically starts with unit tests; even if you don't subscribe to TDD, you've written at least one of them just to see what all the fuss was about. Like me, I'm sure you saw that testing complex logic at build time has huge advantages in terms of quality and being able to take risks. However, even with the most comprehensive tests at 100% coverage, you've still got more work to do on your journey towards a bug-free existence.

Given that most modern applications rely on a wide variety of cloud platform services, testing can't stop with the fakes and mocks...good integration testing is what gets you the rest of the way. Integration testing is nothing new, of course; it's just more complicated today than it was even a few years ago. A tester's job is not only to test your application's custom interface and data, but also to test your interaction and integration with dozens of 3rd-party services and SaaS providers.

Thankfully, we've agreed on a common language...where there is a service, there is a REST API. This post demonstrates how to work with Azure Storage, free of the SDK, in a test environment like Runscope or Postman.

Authorization

By far the worst part of working with this particular API is getting through the 403. The docs are comprehensive in terms of what you have to do, but they are very light on how to do it. With that said, there are two primary ways to accomplish this: you can build a custom Authorization header, or you can generate a Shared Access Signature (SAS) and pass that via query string. In the following, I cover both approaches; however, I highly recommend using a SAS for simplicity.

Generating and using a Shared Key Authorization header

The short of it is that you piece together a custom signature string, sign it with the HMAC-SHA256 algorithm using your primary/secondary storage account key, and Base64-encode the result. If this sounds complicated, it is. Here is the full dump on SharedKey authorization from the Azure docs. The following is an example generation script and how you could go about using it in Runscope, my favorite tool for testing APIs.

var storageAccount = "myStorageAccountName";
var accountKey = "12345678910-primaryStorageAccountKey";
var date = moment().format("ddd, DD MMM YYYY HH:mm:ss");
var data = date + "\n" +
    "/" + storageAccount + "/myTable";
// utf-8 encoding
var encodedData = unescape(encodeURIComponent(data));
// encrypt with your key
var hash = CryptoJS.HmacSHA256(encodedData, accountKey);
var signature = hash.toString(CryptoJS.enc.Base64);
// build the auth header    
var auth = "SharedKeyLite " + storageAccount + ":" + signature;
// show the full header
$("#output").html("Authorization: " + auth);

The pre-request script example in the Runscope test editor interface

Import the test directly into Runscope: https://gist.github.com/sirkirby/389a289f55e8160efcbeef99e1d33db4

Generating and using a SAS

Did I mention this was easier? There are a couple of common ways to generate one outside of using code and the SDK. The most obvious is using the Azure Portal. Just navigate to your storage account blade and look for the Shared access signature option on the left menu. The other option is to generate one using the awesome Azure Storage Explorer tool.

Once in and authenticated, tree down to the account or resource you want to access and use the context menu to generate the signature. If you want tight control over security, I would suggest using Storage Explorer, given that it has an interface for generating signatures on specific tables, containers, and queues. The Portal, on the other hand, only has an interface for the account-level signature (at the time of writing). Now that I have my SAS, here is what it looks like in Runscope:

Request setup with variables and querystring in the Runscope test editor

Import the test directly into Runscope: https://gist.github.com/sirkirby/72cdbeac7f8273da955b3e3784ab7083

Wrap up

Now that you're getting a 200, you can move on to writing your assertions. By default, the OData response you'll get back is in the Atom XML format, which makes writing your JavaScript assertions more difficult. To get the result in JSON, be sure to add an Accept request header with the value application/json;odata=fullmetadata.
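
Once the response is JSON, the assertions get much simpler. Here's a minimal Runscope post-response sketch, assuming a table query whose OData JSON response wraps the returned entities in a "value" array:

// Post-response script sketch -- assumes a table query whose OData JSON
// response returns the matching entities in a "value" array.
var data = JSON.parse(response.body);
assert(Array.isArray(data.value), "Expected an OData entity collection in the response");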

Happy testing!


If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.


6 Common API Errors

forks.jpg

Have you ever used an API that returned an HTML error page instead of the JSON you expected, causing your code to blow up? What about receiving a 200 OK status code with a cryptic error message in your response?

Building an API can be as quick as serving fast food. Frameworks like Express, Flask, and Sinatra, combined with Heroku or Zeit's Now, help any developer get an API up and running in a few minutes.

However, building a truly secure, sturdy, hearty API can take a little more work, just as a chef takes more time when crafting a great meal. You need great docs, clear and concise error messages, and to meet developers' expectations of how your API should work.

On the other side of the table, we have developers interacting with these APIs. And we, as developers, sometimes make mistakes. We can make false assumptions about how an endpoint should work, not read the docs closely enough, or just not have enough coffee that morning to parse an error message. That's where Runscope comes in.

Our testing and monitoring tools can help you uncover issues that would otherwise stay hidden by a lack of integration tests, or real-world use case scenarios. Working with thousands of developers to resolve their API problems has given us unique insight into issues they often see when integrating and interacting with APIs.

Here's our list of 6 common mistakes that can catch you off guard, why they happen, and how you can avoid them:

1. Using http:// instead of https://

Forgetting a single "s" can get you in a lot of trouble when testing an API. Some APIs may only support HTTPS, while others may support HTTP for some endpoints and not others.

Even when an API supports both, you might still run into some errors. For example, some APIs redirect HTTP traffic to their HTTPS counterpart, but not all frameworks will be configured to follow a 302 status code. The Node.js `request` module, for example, follows GET redirects by default, but you have to explicitly set `followAllRedirects` to `true` if you want to follow redirects for POST and other methods.
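
Here's a minimal sketch of that option using the Node.js `request` module (the endpoint and payload below are placeholders, not a real API):

const request = require('request');

// Without followAllRedirects, a 301/302 returned for this POST would not be followed.
request.post({
  url: 'http://api.example.com/v1/items',  // placeholder endpoint
  json: { name: 'widget' },                // placeholder payload
  followAllRedirects: true                 // also follow redirects for POST and other methods
}, function (err, res, body) {
  if (err) throw err;
  console.log(res.statusCode, body);
});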

APIs may also stop supporting HTTP, so it's important to stay up-to-date with any changes. Good API providers will let users know beforehand via email and any social media channels they have. Another step you can take is to use a tool like Hitch, which lets you follow certain APIs and be notified if anything changes.

If you're asking yourself if your API should support HTTPS, then the answer is yes. The process for getting certificates used to be a hassle, but with solutions like Let's Encrypt and Cloudflare, there's no excuse to not support HTTPS. If you're unsure why you should do it, or don't think you should because you're not transmitting any sensitive data, I highly recommend reading "Why HTTPS for Everything?" from CIO.gov.

2. Unexpected error codes

A good API error message will allow developers to quickly find why, and how, they can fix a failed call. A bad API error message will cause an increase in blood pressure, along with a high number of support tickets and wasted time.

I ran into this issue a couple of weeks ago while trying to retrieve an API's access token. The code grant flow would return an error message saying that my request was invalid, but it wouldn't give me any more details. After an hour banging my head against the wall, I realized I hadn't paid attention to the docs and forgot to include an Authorization header with a base64 encoded string of my application's client_id and client_secret. 

Good usage of HTTP status code and clear error messages may not be sexy, but it can be the difference between a developer evangelizing your API and an angry tweet.

Steve Marx had this to say in "How many HTTP status codes should your API use?": "...developers will have an easier time learning and understanding an API if it follows the same conventions as other APIs they’re familiar with." As an API provider, you don't have to implement 70+ different status codes. Another great piece of advice from Steve is:

"Following this pragmatic approach, APIs should probably use at least 3 status codes (e.g. 200, 400, 500) and should augment with status codes that have specific, actionable meaning across multiple APIs. Beyond that, keep your particular developer audience in mind and try to meet their expectations."

Twilio is a great example of best practices for status code and error messages. They go the extra mile and include links in their responses, so the error message is concise while still providing the developer with more information in case they need it.

{
  "code": 21211,
  "message": "The 'To' number 5551234567 is not a valid phone number.",
  "more_info": "https://www.twilio.com/docs/errors/21211",
  "status": 400
}

As API consumers, we need to be careful not to assume that a 200 status code means the call succeeded and returned the information we want. Some APIs, like Facebook's Graph API, always return a 200 status code, with the error included in the response data. So, when testing and monitoring APIs, always be careful and don't automatically assume that a 200 means everything is ok™.
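
One way to guard against this in your tests is to assert on the response body in addition to the status code. Here's a minimal post-response sketch, assuming the API reports failures in an "error" property of the JSON body (as the Graph API does); adjust the property name for your API:

// Post-response script sketch: don't trust the status code alone.
// Assumption: failures show up as an "error" property in the JSON body.
var data = JSON.parse(response.body);
assert(!data.error, "Response body contains an error: " + JSON.stringify(data.error));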

Another great resource about response handling is Mike Stowe's blog post on "API Best Practices: Response Handling."

3. Using the incorrect method

This is an easy one, but surprisingly common. A lot of the time this can be blamed on poor documentation. Maybe the docs don't explicitly say which methods an endpoint supports (GET/POST/PUT, etc.), or they list the wrong verb.

Tools can also play tricks on you if you're not careful. For example, let's say you want to make a GET request with a request-body (not a great practice, but it happens). If you make a curl request using the -d option, and don't use the `-XGET` flag, it will automatically default to POST and include the `Content-Type: application/x-www-form-urlencoded` header.

This post by Daniel Stenberg (author and maintainer of curl) on "Unnecessary use of curl -X" also illustrates another way you might run into this issue when dealing with redirects:

"One of most obvious problems is that if you also tell curl to follow HTTP redirects (using -L or –location), the -X option will also be used on the redirected-to requests which may not at all be what the server asks for and the user expected."

Other times, we might fall into past assumptions and just use the wrong method. For example, the Runscope API uses POST when creating new resources, such as test steps or environments, and PUT when modifying them. But Stripe's API uses POST methods when creating and updating objects.

Both approaches are valid, and Stormpath has a great blog post talking about their differences, and how to handle them as an API provider. No matter which one you choose, just be consistent throughout your API and make sure to have correct and up-to-date docs, so your users don't run into this error.

4. Sending invalid authorization credentials

APIs that implement OAuth 2, such as PayPal, usually require the developer to include an `Authorization` header with each request. It's common to confuse that with `Authentication` instead (I did exactly that while making the GIFs for our last blog post), so if your request is failing, make sure you're using the correct word.

Another issue that pops up with Authorization headers is actually constructing it correctly. OAuth 2 tokens need to be prepended with "Bearer" for them to work:

Authorization: Bearer your_api_token

It's also important when using HTTP Basic authentication to pay close attention to the syntax of the header value. The form is as follows:

Authorization: Basic base64_encode(username:password)

Common mistakes include forgetting the 'Basic ' (note the space) prefix, not encoding the username and password, or forgetting the colon between them. If an API provider only requires a username without a password (like Stripe, where your API key is the username), you'll still need that pesky colon after the username, even if there's no password.
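
If you're building the header in code, here's a small Node.js sketch with placeholder credentials (the key-only case mirrors Stripe's style):

// Build a Basic auth header by hand -- note the space after "Basic" and the
// colon between username and password (kept even when the password is empty).
const username = 'my_api_key';  // placeholder credentials
const password = '';            // key-only auth still needs the colon
const encoded = Buffer.from(username + ':' + password).toString('base64');
const authorizationHeader = 'Basic ' + encoded;
console.log(authorizationHeader);  // e.g. "Basic bXlfYXBpX2tleTo="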

5. Not specifying Content-Type or Accept header

Accept and Content-Type headers negotiate the type of information that will be sent or received between a client and server. Some APIs will accept requests that don't contain any of those headers, and just default to a common format like JSON or XML.

Other APIs are a little more strict. Some might return a 403 error if you're not explicit about the Accept header value, and require you to include those headers on requests. That way, the server knows what information the client is sending, and also what format they expect to receive in return.

This can also cause some confusion if you're testing your API with different tools. curl, for example, along with other popular testing tools, automatically includes an `Accept` header that allows any MIME type (`*/*`) with every request. We at Runscope don't add a default Accept header, so you can get different results when testing the same endpoint.
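
A simple way to avoid surprises is to set both headers explicitly instead of relying on a tool's defaults. Here's a minimal sketch using the same Node.js `request` module as before (placeholder endpoint):

const request = require('request');

request.get({
  url: 'https://api.example.com/widgets',  // placeholder endpoint
  headers: {
    'Accept': 'application/json',          // the format we want back
    'Content-Type': 'application/json'     // the format we're sending (for requests with a body)
  }
}, function (err, res, body) {
  if (err) throw err;
  console.log(res.statusCode, body);
});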

6. APIs returning invalid content type when there is an error

An API response with a full HTML page in the body parameter

I can say that this is one of my pet peeves with APIs. Seeing that <!DOCTYPE HTML> line in a response makes my blood pressure go sky high.

Well, sometimes that's my fault. If you forget to send an `Accept` header with your request, the API can't be sure what response format you're expecting.

For API providers, some frameworks and web servers default to HTML. For example, Symfony, a PHP framework, defaults to returning a 500 HTML error. So if you're creating an API that has no business returning HTML, make sure to check the default error responses.

Another reason this might happen may have nothing to do with your API itself, but with the routing mesh or load balancer that sits in front of it. For example, if you have an nginx instance fronting your API and it encounters a request timeout or other error, it may return an HTML error before your API instances even have a chance to know what's going on.
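
On the client side, you can make these failures easier to diagnose by checking the Content-Type before parsing. Here's a small defensive sketch (the helper name is just an example):

// Defensive parsing sketch: fail loudly when a proxy or framework hands back
// HTML instead of the JSON you expected. parseJsonResponse is a made-up helper name.
function parseJsonResponse(res, body) {
  var contentType = res.headers['content-type'] || '';
  if (contentType.indexOf('application/json') === -1) {
    throw new Error('Expected JSON but got "' + contentType + '": ' + body.slice(0, 100));
  }
  return JSON.parse(body);
}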

Conclusion

These are some of the most common mistakes we have seen across multiple APIs. The list could go on for much longer, so if there's another error you came across and managed to fix, please share it with us @Runscope.

As API providers and API consumers, these mistakes can sometimes go unnoticed and waste thousands of debugging hours. To get more visibility into your APIs and avoid getting stuck in bad redirects or unexpected error codes, check out our API testing and monitoring tools with a free trial account.


Tutorial: Continuous Integration with Runscope API Tests and Codeship


Part of an engineer's toolbox is a good automation workflow. One of the tools in that box is a Continuous Integration provider, which can help teams prevent integration problems and improve quality control.

Runscope can be used on its own for running and monitoring API tests, but another use case is combining it with your CI workflow. Our CEO, John Sheehan, created a sample Python script that can be used to trigger a set of tests in your Runscope account and change your application's build status based on the results.

In this tutorial, we're going to show you how to use the sample script with Codeship. We'll cover how to:

  1. Generate a Runscope API access token
  2. Get your API tests Trigger URL
  3. Set up your environment variables in Codeship
  4. Run the Python script as part of your CI test commands

We have also included a section at the end for Codeship Pro users. :)

Let's get started!

Setting up the Python script

You can find the script sample in our GitHub organization:

The two files we're interested in are "requirements.txt" and "app.py". For this tutorial, I'm just going to work with the raw links from our GitHub repository.

If you're integrating this in your project, I highly recommend either forking this to your own repository, or adding these files to a separate folder. That way, you can prevent your build from breaking in case there's an update to the repository.

Getting your Runscope variables

Trigger URL

When running this script from the command line, you can pass one parameter to it: a Runscope Trigger URL. If you want to run a single test, you can find the test's trigger URL under its environment settings:

Runscope test environment settings with the Trigger URL option selected

If you want to run all the tests in a bucket, you can find the trigger URL in your bucket settings:

Runscope bucket settings page, highlighting Trigger URL section

Generating Your Runscope API Key

We need a Runscope personal access token to interact with the Runscope API and retrieve the results from our test run.

To get your personal token, head over to https://www.runscope.com/applications, and create a new application. You can just use dummy URLs for the app and callback URL values (e.g. http://example.org).

Scroll down to the bottom of the page, and copy the personal access token value. We're going to use that in the next step.

A Runscope applications settings page, with the Personal Access Token highlighted


Integrating with Codeship

In your Codeship account, select the project you're working with and click on Project Settings -> Environment Variables.

We only need to add one key here named RUNSCOPE_ACCESS_TOKEN. Paste the value that you copied in our previous step, and click on "Save Configuration".

A Codeship environment project settings page, showing how to set the environment variable "RUNSCOPE_ACCESS_TOKEN"

Now, select "Test" on the left-hand "Project Settings" menu.

The Codeship environment already comes with python and pip pre-installed. The first thing we need to do is make sure the necessary packages for the script are installed. Add the following command to your "Setup Commands" window:

pip install -r https://raw.githubusercontent.com/Runscope/python-trigger-sample/master/requirements.txt

Note: Remember to change the requirements URL to your fork or local file.

Next, I'm going to add another command just below it to download our `app.py` file (you can skip this step if you copied the files to your project):

wget https://raw.githubusercontent.com/Runscope/python-trigger-sample/master/app.py
A Codeship test project settings page, showing the "Setup Commands" text box with the bash commands to install the script requirements and download the Python script

Finally, under the Configure Test Pipelines header, in the "Test Commands" window, we can run our script. It takes one parameter, which is the Trigger URL you copied at the beginning of this tutorial. So we can just run the command as:

python app.py https://api.runscope.com/radar/your_test_trigger_id/trigger?runscope_environment=your_runscope_environment_id

Note: Make sure to change the URL after `app.py` to your tests Trigger URL.

A Codeship test project settings page, showing the "Configure Test Pipelines" text box with the bash command to run the Python script

Checking Your Build Runs

In your next build runs, you should be able to see a new step running the Python script, and hopefully returning a green checkmark ✅:

A successful Codeship build run, with a line highlighted for the command that runs the Runscope Python script

Integrating with Codeship Pro

Codeship Pro is a more advanced offering from Codeship. It offers you more control and customization over your CI process, and includes native Docker support.

If you're using the Pro plan and want to integrate your workflow with Runscope API tests, we created a sample project that you can find here:

https://github.com/Runscope/codeship-pro-tutorial/

It's based on Codeship's tutorial project, and it includes instructions on everything you need to set it up. When you run it, it will spin up a separate container to run your Runscope API tests, in parallel to the sample Ruby application.

Continuous Integration Complete

With Runscope integrated into your CI process, we hope that you have even more confidence in your builds and that your APIs will be 200 OK.

We used Codeship in this tutorial, but these instructions should also apply to other CI providers like CircleCI or Jenkins (we have a Jenkins plugin). If you need any help with those, please reach out to our awesome support team.

Are you integrating Runscope in your build process? If so, we'd love to hear how you're doing it, just reach out to us at heitor@runscope.com or on Twitter @Runscope.



If you're new to API monitoring and testing, learn more about Runscope's cloud-based API testing solution and sign up for your free trial account today.


How Trustpilot Tests and Monitors 200 Microservices

Trustpilot and Runscope logos

Trustpilot's journey with Runscope began back in 2015. They started off with our free trial plan, and have since grown to almost 5 million test runs a month. They were kind enough to chat with us about their experience, and how they transitioned from a single monolithic application, to a more efficient microservice architecture with over 200 microservices and hundreds of endpoints.

Here's Dumitru Zavrotschi sharing their story:

First things first, tell us a little bit about yourself.

My name is Dumitru Zavrotschi, and I'm a Test Automation Engineer at Trustpilot. I have been working here for over 2 years, and I've worked in QA for over 7 years, mainly for companies in the telecom business.

What does Trustpilot do?

Trustpilot is an online review community, and our goal is to help people buy with more confidence by creating a community based on trust and transparency. Customers can write reviews on our platform about any company, service, or product. We also help businesses interact with our community by providing them with tools to build their reputation, drive conversion, and improve their services.

How big is the company? How many people are in the development team?

We are about 500 people located all over the world. Our biggest office is in Copenhagen, Denmark with around 300 people, and we have other offices in New York, Denver, London, Melbourne, and Berlin.

The development team is made up of about 50 people, divided into 7 teams. The QA team consists of 3 people right now, and we work across all development teams. We test and monitor every new application or microservice, from start to finish, and make sure that it matches our quality standards and works well with other services before it's deployed to production.

Tell us a little bit about the transition from a monolithic application to microservices. What was that like?

A few years ago, when the company was just getting started, all we had was a single repository with a monolithic application. As the team started growing, and we needed to add more features and more services to our product, we realized that we had to break down our application into smaller pieces to allow for faster iteration.

The transition happened gradually, and now the biggest parts of the monolithic application are decoupled. The process is still ongoing, but now our development teams are able to start working on new features and shipping them much faster.

 

"We were able to increase the number of deploys from twice a week to 70+ (40+ staging, 30+ production) times in a working day."

 

But we did run into a side-effect during that transition that we weren’t expecting. With the increase in changes and deployments we were doing, things started to break. For example, we would run into issues because of misconfigured routes, or a new service wouldn't be able to handle the amount of traffic and then other services would fall over.

These errors started happening, and sometimes we would struggle to find out what was wrong. We needed more visibility into our APIs and how they were communicating with each other.

Worst of all was when these errors happened and we wouldn't notice them. Sometimes we would only find out that a microservice had been broken for the whole weekend when we got back to the office on Monday. We realized we needed a solution to monitor our APIs, and also a better way to catch things before they were pushed to production.

What made you choose Runscope as your API monitoring solution?

We tested a few different products but decided to go with Runscope for a few reasons. First, it was very easy to set up and start using. Second, it had most of the features that we wanted, the biggest one being the ability to write custom scripts. And lastly, it had integrations with other tools that we were already using, such as Slack, PagerDuty, and Ghost Inspector.

We started by testing five critical features in our application, and then expanded from there. We were also able to leverage the API to automate and customize a few tasks to fit into our workflow.

Nowadays, we rely heavily on shared environments, script libraries, and subtest steps to avoid as much duplication as possible.

How do you use Runscope at Trustpilot?

Each team has their own separate Runscope bucket with tests for their microservices and APIs, and we have a few others for 3rd-party services and internal tools. We have over 20 buckets and more than 200 tests in use.

Our tests range from one or two requests that check a single endpoint, to full integration tests. Those include setup and teardown of test accounts, 3rd-party application tests, and many others with 10+ steps that will simulate a user’s actions throughout our website, and make sure that everything is working correctly.

For example, one test will set up a test account, create a review, publish the review, get the published review, and delete the test account. Another example is a test that integrates with our email platform, to retrieve an automated email link and test that it is valid.

We also have a Runscope bucket that contains a few shared tests that everyone can use. For example, we have a test to automatically refresh and update new access tokens, and other teams can use that as a subtest step.

What is your CI/CD process?

We use Octopus, TravisCI, and AppVeyor for our CI/CD process. Once something is committed to our master branch, we do an auto-deploy to our staging environment. Our API tests run on a scheduled basis for both our staging and production environments, so no bugs can leak from staging to production.

Even with the continuous monitoring on staging, in some cases, we have tests integrated into our Octopus deployments. They make sure that all features are working before it gets deployed to production.

What teams interact with Runscope?

All the development teams in the organization use Runscope, which includes managers, QA, developers, and DevOps.

Since every team has their own bucket, we also have some screens in the office displaying a bucket's dashboard, so anyone can easily see the health of their microservices.

We have also built an internal "Alerts Dashboard", which displays alerts coming from different sources like New Relic, Octopus, AWS, and also Runscope. So every team has all alerts in one place, which is very convenient for them. For Runscope, we only display tests that have failed 3 consecutive times, to avoid displaying random minor issues.

What are your plans for improving QA at Trustpilot in the future?

I'm the person that's most familiar with Runscope, so currently, whenever we create a new microservice, I'll create a new set of tests for it, or a template for teams to start working with. But the goal is to empower more people in our organization to use Runscope and create tests themselves. We're having a few internal presentations in the next weeks for technical and non-technical usage, so our support team, managers, and designers also feel comfortable using it.

We're also planning to expand our set of tests to include stress, performance, and security testing, to make sure our systems are operating correctly outside standard capacity, and are scalable to support heavy usage.

Conclusion

Vikram Mahishi (left) and Dumitru Zavrotschi (right) from the Trustpilot team

A big thanks to Dumitru and Vikram from the Trustpilot team for taking the time to share their story with us. If you want to learn more about Trustpilot, you can check their website here.

We're looking to tell more stories about how people are using Runscope to make their jobs easier. If you're interested in sharing your story, please reach out to us.
 


Do you need more visibility into your APIs and microservices architecture? Learn more about Runscope's cloud-based API monitoring and testing solution and sign up for your free trial account today.


Tutorial: Introduction to Monitoring SOAP APIs


The SOAP vs. REST debate might have ended with REST as the clear winner in adoption in recent years, especially when we're talking about public APIs. However, SOAP APIs are still around and in use, especially for maintaining support for legacy systems or in specific industries such as financial and telecommunication services. Even big tech companies, such as PayPal, Flickr, and Salesforce, still have SOAP APIs available.

Runscope supports testing any HTTP request, which includes making SOAP requests. In this tutorial, we're going to walk through how to test a SOAP API that returns geolocation information based on an IP address and validate its response using assertions and post-response scripts.

Testing a SOAP API Endpoint

Here's what a request to our IP2Geo SOAP API looks like:

POST /ip2geo/ip2geo.asmx HTTP/1.1
Host: ws.cdyne.com
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://ws.cdyne.com/ResolveIP"

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ResolveIP xmlns="http://ws.cdyne.com/">
      <ipAddress>string</ipAddress>
      <licenseKey>string</licenseKey>
    </ResolveIP>
  </soap:Body>
</soap:Envelope>

We're using SOAP 1.1 here, but the API also supports SOAP 1.2, and you can test both versions using Runscope.

For this example, I'm going to be using my IP address, which will return an address from Chicago, in the US. You can use your own public IP address in the following steps if you want, and you can find it by searching for "what's my IP" on Google, or typing the following command in your terminal:

curl httpbin.org/ip

The first thing we need to do is create a new test in the Runscope interface, and set the following parameters:

  1. Method: POST
  2. URL: https://ws.cdyne.com/ip2geo/ip2geo.asmx
  3. Headers:
    • SOAPAction: "http://ws.cdyne.com/ResolveIP"
    • Content-Length: length
    • Content-Type: text/xml; charset=utf-8
A request step in the Runscope interface, with the fields filled out from the previous bullet points.

Then, we need to add our request's envelope body by clicking on "+ Add Body", and set two parameters:

  • ipAddress - Change it to your IP address. In this case, I'm using my IP which is "73.247.157.30".
  • licenseKey - Set this to 0 since we're just testing the API.
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ResolveIP xmlns="http://ws.cdyne.com/">
      <ipAddress>73.247.157.30</ipAddress>
      <licenseKey>0</licenseKey>
    </ResolveIP>
  </soap:Body>
</soap:Envelope>
A request step in the Runscope test editor interface, with the Parameters field containing the previous XML envelope.

Now, we just click on "Save & Run" at the top to run our request. 

We can see the results by clicking on the first item on the left-hand side, under "Recent Test Runs". Our SOAP API is going to return an XML object, and it should look similar to this:

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope
  xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <soap:Body>
    <ResolveIPResponse
      xmlns="http://ws.cdyne.com/">
      <ResolveIPResult>
        <City>Chicago</City>
        <StateProvince>IL</StateProvince>
        <Country>United States</Country>
        <Organization />
        <Latitude>41.97279</Latitude>
        <Longitude>-87.6616</Longitude>
        <AreaCode>773</AreaCode>
        <TimeZone />
        <HasDaylightSavings>false</HasDaylightSavings>
        <Certainty>90</Certainty>
        <RegionName />
        <CountryCode>US</CountryCode>
      </ResolveIPResult>
    </ResolveIPResponse>
  </soap:Body>
</soap:Envelope>

Note: if you get an invalid response, make sure that you're using a valid public IP address in your request.

Validating Response Data with Assertions

OK, so we know that our test and our SOAP API are working. For the next step, we want to make sure that our response is returning the correct data. There are two ways we can create assertions in the Runscope interface:

Built-in XML Assertions

The first way we can do that is by clicking on the "Assertion" tab in our test editor:

Showing the Assertions tab for a request step in the Runscope test editor interface.

We already have a default assertion set up that expects our test to return a 200 status code. Now, let's add another assertion to check that the `City` element is equal to the city our IP address resolves to.

To do that, we first click on "+ Add Assertion". Then, we set our "Source" to "XML Body", and under "Property", we're going to set it to: "//*[local-name()='City']/text()"

That is an XPath expression we're using to search for all elements named "City" and then extract their text value. You can learn more about XPath expressions and play around with them by using this XPath Tester/Evaluator tool.

We also need to set the Comparison field to "equals", and the Target Value to the city your IP address resolves to. In my case, I'll just set it to "Chicago":

Showing a new assertion added to our request step to extract the "City" element from our XML response and compare it to the string "Chicago".

Now, we can rerun our test and see if it's still passing by clicking on "Save & Run" at the top, and then heading over to our test result:

Showing the successful assertions in our test run result page after running our test, checking that it returns a 200 status code, and that the city our IP resolves to is equal to "Chicago".

Post-response Scripts

Another way for us to test that our response data is correct is by using the Post-response Scripts feature. We can use one of the included libraries, marknote XML Parser, to work with the XML response and retrieve the elements we want to test.

So, if we wanted to do a similar assertion as we did in the last step, we can go to the Post-response Script tab in our test step and add the following script:

var str = response.body;
var parser = new marknote.Parser();
var xml = parser.parse(str);
var body = xml.getRootElement();
var resp = body.getChildElement("ResolveIPResponse");
var result = resp.getChildElement("ResolveIPResult");
var city = result.getChildElement("City");
assert(city.getText() === "Chicago", "Windy City is correct!");
Showing the Post-response Script tab in a request step, containing the previous script to extract the "City" element from the XML response and assert that it's equal to "Chicago".

And we can see our script output and success message in the test result:

Showing the test result page from our last request, which includes the assertions and a new "Scripts" part, showing that the assertion from our script was successful and the script succeeded.

Using scripts can be useful if you plan to do something more complex with the data you get back from your SOAP API. You can also combine scripts with snippets to avoid repeating that boilerplate XML parsing code in our example script, and reuse assertions across multiple tests.
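
For example, a small helper like the one below could live in a snippet and be reused across tests. It's just a sketch that mirrors the marknote parsing above and assumes the same ResolveIP response shape; the function name is made up:

// Reusable snippet sketch: pull a single field out of the ResolveIP response.
// Mirrors the parsing from the script above; getResolveIpField is an example name.
function getResolveIpField(responseBody, fieldName) {
  var parser = new marknote.Parser();
  var xml = parser.parse(responseBody);
  var body = xml.getRootElement();
  var result = body.getChildElement("ResolveIPResponse").getChildElement("ResolveIPResult");
  return result.getChildElement(fieldName).getText();
}

assert(getResolveIpField(response.body, "CountryCode") === "US", "Country code should be US");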

Creating Tests with the Traffic Inspector

One last tip before we finish this tutorial: there's a quicker way to create tests than through the interface, in case you already have an application that's using the SOAP API.

If you have the ability to edit your application that's using the SOAP API, you can change its URL to a Runscope Traffic Inspector URL, and capture those requests. That way, you can just go to your bucket, head over to the Traffic tab, and then convert those requests into tests. That can be a quicker and easier way to create properly formatted SOAP requests for your API.

You can learn more about how to use the Traffic Inspector in our documentation.

Conclusion

Testing and monitoring SOAP APIs is as easy and important as maintaining REST APIs. Whether they are legacy systems or external dependencies that you have to support, you can rely on Runscope to make sure everything is going to be 200 OK®.

If you need any help creating your SOAP API tests, please reach out to our awesome support team.


Do you need to test and monitor SOAP web services? Learn more about Runscope's cloud-based API monitoring and testing solution and sign up for your free trial account today.

