Runscope Blog - API Monitoring and Testing

Beyond Black Friday: 3 Tips on Preparing Your APIs for a Major Event During Any Season


Black Friday is around the corner, signaling the beginning of the holiday crunch. For retailers, this is make-it-or-break-it season, a crucial revenue period when they absolutely can’t afford to have their apps or websites break or fail. Part of that means making sure their APIs are rock solid. Even a short outage on Black Friday or Cyber Monday can mean huge losses for retailers and their partners. During last year’s holiday season, mobile accounted for 45 percent of all online traffic, a year-over-year increase of more than 25 percent.

But retailers are not alone, and many businesses will encounter a time in which they can’t afford even the slightest breakage: the Super Bowl for media, tax day for finance, a big event or launch announcement, or the close of a long release cycle. You want your code to be functionally complete and protected from new changes that could introduce a very costly and/or public failure.

Since retailers do this every year, there’s a lot to learn from the practices they have down pat. We’ve gathered three tips from the retail industry to get you ready for crunch time, so you can face any mission-critical event—even Black Friday.

1. Go into a development lockdown

To avoid introducing regressions, degrading performance or otherwise disrupting the customer experience during the holidays, nearly every retailer puts a freeze on their code as early as October or the first week of November in preparation for the retail season. This is not the time to experiment with brand new things that could destroy your numbers.

“During a major event that brings a significant number of customers to our site and apps, it’s critical that we don’t introduce any unnecessary changes that could impact the consistency and reliability of the customer experience,” says Roberto Lancione, IT Enterprise Architecture Manager at ALDO.

The most minor aesthetic change can alter the way customers interact with your site or your mobile app, so bring A/B testing to a halt as well. Keeping your UI consistent will keep your users comfortable and unaware of the potential frenzy going on behind the scenes.

2. Test everything, everywhere

While A/B testing introduces unnecessary complexity, testing your system as a whole is essential to ensuring the end-to-end customer experience is seamless. You want to prevent any performance issues during days of peak traffic to your site or app. One of the largest retailers in the country tests its production environment to the point of almost bringing its site down to stay ahead of any issues as much as possible.

One way to do this is ensuring that your app works across all devices. AWS Device Farm is a handy testing tool that lets you simulate your mobile app on hundreds of different smartphones and tablets so you don’t have to build these tests in-house, and can plan for performance across devices you might not have thought of.

Once you’ve tested your system internally, it’s time to put those tests on a schedule to monitor performance so you’re the first to know when a service goes down. Monitoring the APIs that power your infrastructure and your apps from locations around the world ensures that you have your customers’ backs, no matter where they’re located. Make sure that your web UI is interacting with your APIs as expected and data isn’t being cached the wrong way. Easy-to-use tools like Ghost Inspector give you full coverage between your APIs and your websites.

3. Monitor API dependencies

Putting a code freeze in place and monitoring your services is great for your own infrastructure, but what about the external services your business relies on? Most apps, and indeed most businesses, today integrate with multiple partners or third parties to add to or enhance their product or service, such as payment transactions, email confirmations and receipts, or a CRM.

First, don’t change providers during a critical event like Black Friday. Even if you’re looking to make a switch, this is not the time to introduce external changes when short-term success is the goal. Get all your SLAs in order so you know what to expect from those providers in case their services go down.

Second, keep an eye on the production traffic that matters to your business by setting up real-time monitoring. Live Traffic Alerts notifies you the second a customer gets a 403 error that prevents them from paying for your product or getting an email receipt. You can then send the response body data to your provider instantly to let them know there’s an issue on their end faster than waiting for support tickets to come through.


We all face periods where our web services and mobile apps are put to the test. Even if your only participation in Black Friday this year is buying the perfect present for your mom, you can still reap the benefits of the long-standing work that retailers have done to build best practices for preparing for those events. Happy shopping and monitoring!


'Tis the Season: Going Shopping for Retail APIs


Today’s retail scene is a shopper’s paradise—from one-click purchases to customized web experiences to personalized in-store experiences, it’s become easier than ever to spend money on yourself and loved ones. As consumers buy more via mobile and other devices, and IoT becomes more than just a buzzword, retailers rely on robust sets of APIs to bring consumer data in and make these in- and out-of-store experiences unique and consistent.

A report from RetailMeNot and Forrester says that sales via mobile moments will reach $689 billion in 2017. The study explains that receiving the right data from customers and being able to share that data with the right partners and services is a key factor for retailers to be successful in mobile. A major way of facilitating those exchanges is with APIs.

Many retailers across the globe have already made the transition to technology-focused business strategies with APIs as the connective tissue between their infrastructure and their consumers. Additionally, several new platforms in the ecommerce space have cropped up to connect retailers and consumers.

Let’s take a look at some of the ways that retailers are leveraging APIs to improve infrastructure efficiency and extend their reach.

Transforming Retail from the Inside Out

High-end brands with customers all over the country and the world are investing massive resources in their digital strategies, starting with their internal infrastructure. Iconic British brand Burberry began an internal technology revolution a few years ago, beginning with supply-chain logistics and in-store analytics. The company has evolved to a microservices architecture powered by APIs that allow it to make profitable integrations with platforms like Instagram.

Nordstrom has made significant investments in infrastructure that have allowed the company to grow by more than 50% in five years. A consistent multichannel experience that gives flexibility to customers keeps them coming back and making purchases.  

DevOps and APIs have been huge enterprise-wide initiatives at Target, providing the company with new revenue streams and ecosystems beyond its brick-and-mortar walls. Target’s API team has evolved the company’s infrastructure to more than 80 APIs that are essential in its continuous integration (CI) and continuous deployment (CD) processes.

One of the biggest retailers in the UK, Tesco, has a Tesco Labs division that launched the company’s Grocery API and app in 2014. The group is now focusing on hackathons and frictionless payments, among other things, to make shopping for groceries a more efficient and personalized process.

Engaging the Developer Customer

Some companies are harnessing the power of developers by making their APIs publicly available. Macy’s has a long-standing developer program, complete with its own Twitter account, for five APIs, including Ad Media Services, Catalog Services and more. These APIs are behind Macy’s mobile strategy that includes apps like a find-in-store app and catalogue app, as well as in-store tools for employees.

Macy’s sister company Bloomingdale’s also has a public developer portal for a variety of APIs powering mobile apps like its shopping app, seasonal catalogue and designer collections.

International shopping center corporation Westfield created its own innovation segment in Westfield Labs to bring the in-person experience of shopping in a mall to life with APIs. The group launched the Dine on Time app for real-time meal delivery from its San Francisco Centre’s food court last year, and plans to extend to targeted displays of relevant products at nearby stores.

Walmart also has a Labs arm with a public developer portal that provides access to 17 APIs, complete with I/O docs and documentation.

Best Buy has had an open developer platform since 2009, encouraging third-party devs to create apps for both employees and consumers with eight different APIs, from Buying Options to Recommendations.

Technology Platforms Supporting Retail

Many of the APIs that support retail today aren’t on the retailers’ side, but rather cropping up in services and platforms that complement and extend big brands’ efforts with new technology.

Adding functionality to apps and web, Bazaarvoice is a platform that lets retailers leverage user-generated content like ratings and reviews and Q&A forums to drive engagement, and ultimately revenue. The company has a robust developer portal around its Conversations API so developers can easily build this functionality into any retail or ecommerce app.

APIs are also facilitating new, faster ways for consumers to make purchases. Take Rue La La, a members-only site that sells designer brands at reduced prices, which successfully integrated with the Google Wallet API last year to speed up paths to purchase. Rue La La as well as Neiman Marcus have also built integrations with MasterPass, MasterCard’s digital wallet, for faster in-app purchases.

For in-store analytics, services like RetailNext pull in data from sources like cameras, PoS systems and even weather, and then push tailored communications to the retailer’s own apps, third-party apps, analytics solutions and more. This service allows retailers to better understand how customers shop in their store and then customize the experience based on multiple data points.

Even non-retail companies like Orange, a telecommunications company based in France, have created mobile APIs to facilitate the connection of new technologies like near field communication (NFC), which builds on RFID and speeds up the shopping experience by making things like credit card payments faster. The telco released an NFC API toolbox for developers to build the technology into mobile apps for things like loyalty programs and digital shopping carts.

APIs Pushing the Boundaries Beyond Apps

This list is not exhaustive, but from these real-world retail use cases, it’s clear that APIs will continue to shape the shopping experience, evolving the mobile experience and fueling exciting additions within the physical store. With APIs, shopping isn’t dead, and in fact, retailers can teach us all a few things about maintaining our APIs. Many retailers and other companies with mission-critical APIs make sure those APIs are always up and performing as expected by monitoring for real-world use cases and key API transactions with API monitoring tools like Runscope. It's free and easy to get started, even when the season of sales is over.

So the next time you find yourself getting coupon alerts on your phone while in a store, or entranced by a big screen in a mall, remember, there’s probably an API for that.  

This Fortnight in APIs, Release VII


This post is the seventh in a series that collects news stories, helpful tools and useful blog posts from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more.

For when you came for the API talks, but stayed for the BBQ:

Last month at APIStrat in Austin, Texas, we had a great time coming together with the API community to get into the nitty gritty about problems we all face and creative ways to solve them. This recap from Desiree Schillinger is a pretty comprehensive analysis of 6 tips for developer marketing that go from API documentation on the front-end to leveraging your role within your company. Schillinger even brings in outside philosophies like Maslow’s hierarchy of needs and compound interest to help explain these tips. If you’re looking for more APIStrat, check out this collection of key talks from Restlet and this summary of the two-day conference from Clarify.

For when you want your auth to have the best of both worlds: 

API authentication has always been a balancing act between security and convenience. This is particularly true for browser-based apps. But for securing APIs in server-to-server scenarios like microservices, or native applications talking to servers, we can consider implementing more risk-appropriate security. This post by Craig Weber discusses why asymmetric cryptography (RSA key pairs) is useful as an authentication mechanism. He’s also written a Python/Django implementation and middleware system. If you need to brush up on JWT, check out the Stormpath blog for many great articles.
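To make the idea concrete, here's a minimal sketch of signing a request body with an RSA key pair using Python's cryptography package. This illustrates the general pattern, not Weber's implementation, and the X-Signature header named in the comments is hypothetical:

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Client side: sign the request body with the private key.
    with open("client_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)

    body = b'{"action": "transfer", "amount": 100}'
    signature = private_key.sign(body, padding.PKCS1v15(), hashes.SHA256())
    # Send the signature alongside the request, e.g. base64-encoded in a
    # hypothetical X-Signature header.

    # Server side: verify against the client's registered public key.
    public_key = private_key.public_key()  # in practice, loaded from a key store
    public_key.verify(signature, body, padding.PKCS1v15(), hashes.SHA256())
    # verify() raises cryptography.exceptions.InvalidSignature if the body
    # was tampered with or signed by a different key.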

For when you didn’t get enough Py at Thanksgiving:

If you’re a Python fan and looking for an easier way to test web apps, the open source community just saved the day. With this open-source Python library, you can execute web browser and UI tests through the Ghost Inspector API. Tests can be specified in a YAML configuration file and triggered using the pytest framework.  

For when choosing an API schema has created an API schism:

Choosing a description format for your RESTful API is never an easy task, especially with the abundance of options. One format may have incredible tooling for docs, but lacks native support for your parameterized URIs. Another format may have the exact vendor integrations you need, but your team has threatened to strike if you make them work with YAML. This article by Chris Wood at Nordic APIs reviews API Transformer, a web service for automatically converting between the most popular schemas, including: Swagger, API Blueprint, RAML, Google Discovery (like JSON-Schema), I/O Docs and more.

For when building analytics can actually be Simple:

Many people tout the benefits of making data-driven decisions in business, but how do you build a foundation that enables everyone in your company to use data effectively? For startups with little headcount especially, instituting analytics that measure the data you want, the way you want, can be difficult and time-consuming. The team at Simple, a fintech company that provides banking and money management tools, is letting you learn from their successes and failures in building and scaling analytics in-house. Get a peek behind their Data team, infrastructure and process for an in-depth overview with lessons you can take to your organization.

For when you've been over-served at the coffee shop and need a ride home:

While much of the competition within the sharing, and ridesharing, economies occurs in the forefront, whether it be in competitive tactics or legal battles, we hear less about what’s going on on the backend for these companies. Uber, which launched an API last year, just upped what developers can do with it by adding a Ride Request Button feature to its API. Developers can now add a Ride Request Button to any app that deep links back into Uber’s app. The company’s affiliate program actually pays devs $5 for each U.S. rider they refer, with the caveat that they cannot hook into any other services like Lyft or Flywheel. Lyft has already had success with this mobile tactic—you can request a Lyft from Slack and Starbucks, among other partners. It will be interesting to see how APIs and developers may play a make-it-or-break-it part in ridesharing companies’ strategies in the face of heavy competition.

For when you need to get Linux back in (command) line: 

We’ve all been there—standing over the shoulders of the sysadmin, her fingers flying with command-line wizardry as she figures out what’s causing a Linux server to flail. And before you can even ask, “What commands are you running?” she delivers the prognosis and brings the CPU load average down to 0.05. This article from Netflix lists out 10 command-line tools and how to use them to help you analyze your Linux server’s resources in just 60 seconds.

For when you want two tickets to the docs show:

In an effort to make their API documentation more successful, the developer team at Best Buy has opted for open-sourcing their docs—and they’re giving you a front-row seat to watching the process unfold. The team is also migrating their docs to Slate and looping us in on the progress. It’s always nice to see what goes on behind the scenes of our favorite APIs, and it’s especially interesting when it comes from a big brand like Best Buy.

For when your favorite store has more than just an app:

Now that the holidays are upon us, people are buzzing about Black Friday, Cyber Monday and buying the perfect gift. Behind the scenes of the mad shopping rush are numerous APIs powering apps that do more than just let you make purchases from bed. We created an extensive list of some of the biggest retailers’ APIs and how they’re driving business today, ranging from microservices at Target, to targeted IoT displays in Westfield malls, to personalized in-store mobile experiences at Macy’s. Check it out and see which of your favorite brands are putting APIs at the forefront, and the storefront.

Notice something we missed? Put your favorite stories of the past fortnight in the comments below, or email us your feedback!

Using the Runscope API to Dynamically Update Environment Variables


Runscope API tests support variables, allowing you to pass one or more values between requests within a test. Variables are extremely helpful when you need to extract data from one request in your test and then apply or compare that value at another stage of your test. We sometimes refer to this as chaining requests. Variables can be declared before an API test run using Initial Variables (located in Environment settings), or you can create them dynamically after each request within your test. Runscope variables are limited to an individual test run, which means that they are reinitialized with each test run. 

There are many use cases for needing to persist variables, such as storing credentials like an access token in an initial variable. This makes it convenient for making authenticated API requests in your tests and only having to manage those credentials in one place (in the environment settings).

But that convenience wanes when those credentials change often and you need to modify those settings by hand. Fortunately, the Runscope API allows you to update initial variable values programmatically. Leveraging this functionality is easy to do. 

Updating Initial Variables Using the Runscope API

The Runscope API provides access to data in your Runscope account, including API tests, test steps, results, environment settings and much more. Initial variables belong to Environments settings, and are therefore accessible through the Environments resource in the API. The API includes methods for listing, fetching, creating, deleting and modifying environments. 

1. Create a Runscope personal access token—The API uses OAuth 2.0, so you'll need to create an application from the profile page. This will provide you with a Personal Access Token which can be used as an OAuth bearer token to access data in your account.

2. Fetch the test ID—All Runscope tests have a unique ID. You can obtain this ID from the dashboard URL while viewing/editing your test. Another way to obtain the ID is with the API using the List Tests method which lists all the tests in a bucket. 

Tip: Use the Request Editor to explore the Runscope API. A handy button at the bottom of the application page will load the Request Editor with your bearer token pre-populated in the headers.



3. Fetch the environment JSON object—Using the test ID from above, make a GET call to the environments endpoint. This fetches an array of test environments. Select and copy the environment JSON object into your clipboard in preparation for the next step.

4. Edit environment JSON and submit changes—Paste the JSON representation of the environment settings into your favorite text editor and modify the value of the initial variable. Below, we're using the Request Editor to execute a PUT call to modify the environment record, placing the updated JSON in the request body. Notice how the environment ID is appended to the request URL (obtained from the JSON object).

Note of caution: A PUT call to the environments endpoint must contain a complete JSON representation of the environment; if you submit a partial object, any settings you omit will be overwritten for that test environment. This is why we copied the entire JSON object for the environment, and not just the variables. If you do end up making a mistake editing your tests or environments, you can always roll back to a previously saved state using Runscope Revision History.
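Here's a rough sketch of steps 3 and 4 in Python with the requests library, assuming the standard https://api.runscope.com base URL; the bucket key, test ID and variable name are placeholders, and the field names should be taken from the JSON you copied in step 3:

    import requests

    API = "https://api.runscope.com"
    HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # personal access token
    BUCKET_KEY, TEST_ID = "your_bucket_key", "your-test-id"

    # Step 3: fetch the test's environments (an array of environment objects).
    resp = requests.get(
        "{}/buckets/{}/tests/{}/environments".format(API, BUCKET_KEY, TEST_ID),
        headers=HEADERS)
    environment = resp.json()["data"][0]  # pick the environment to modify

    # Step 4: change an initial variable, then PUT back the *complete* object
    # (a partial object would wipe out any settings you omit).
    environment["initial_variables"]["access_token"] = "new-token-value"
    requests.put(
        "{}/buckets/{}/tests/{}/environments/{}".format(
            API, BUCKET_KEY, TEST_ID, environment["id"]),
        headers=HEADERS, json=environment)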

Dynamically Updating Environments from within a Test Run

Now that you know how easy it is to make updates to initial variables and other environment settings, you might be wondering if you can embed a PUT call inside one of your tests. And indeed, it will work. But as stated above, proceed with caution—you are, after all, updating a live environment.

There's a Lot More API to Explore

Modifying initial variables is just one example of how the Runscope API can provide test management and configuration flexibility. Every aspect of your Runscope tests can be accessed via the API, including the ability to create and delete tests, environments and schedules programmatically. You can start playing around with the Runscope API by signing up for Runscope for free. Check out the API documentation for information on how to get started, and contact our support team if you need a hand. 

Alert Your On-Call Team of Production API Failures & Anomalies with PagerDuty


When the production APIs that your apps rely on are down or returning unexpected data, notifying your team about these problems is a top priority. Broken services mean broken apps, and the faster your team is aware of issues, the faster you can solve them.

We’ve teamed up with PagerDuty to integrate Runscope Live Traffic Alerts into the comprehensive incident notification and management system.

PagerDuty provides on-call team management as a service with escalation policies, scheduling and a full complement of notification methods including SMS, phone calls, email and push notifications. Live Traffic Alerts allows you to catch API call failures and exceptions that occur in production. By combining these together, your on-call team can receive notifications about production API call failures as they happen using your existing PagerDuty escalation and notification policies.

Connecting Live Traffic Alerts to PagerDuty

Getting started is easy. Log in to your Runscope account and navigate to Alerts in the top navigation. (If you don't see Alerts, contact our Sales team about getting started.) While creating a new alert or editing an existing one, select PagerDuty from the integrations list. Once your PagerDuty and Runscope accounts are linked together, you can connect PagerDuty to as many Runscope Alerts as you wish.

Automatic Alerts, Integrated Acknowledgement

When a Runscope Alert is triggered, a PagerDuty incident is automatically raised. You can optionally resolve the raised PagerDuty incident by clicking the acknowledgement button on the Alerts dashboard. All details related to Runscope triggered incidents and resolutions are shown in the PagerDuty activity log.

 

Start Catching Production API Failures Today

PagerDuty integration for Live Traffic Alerts is available now. Live Traffic Alerts is available to Runscope customers on Medium and larger plans, as well as those on the free trial. Sign up for Runscope today to get started using Live Traffic Alerts. If you're currently on a Free or Small plan and want to see Live Traffic Alerts in action, our Support team will set you up with a trial, or contact Sales to upgrade your plan.

This Fortnight in APIs, Release VIII


This post is the eighth in a series that collects news stories, helpful tools and useful blog posts from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more. This post is also our final Fortnight of this year. Thank you for giving us such great stories to share, and get excited for more Fortnight in 2016!

For when you want to eat your own dogfood, but there’s not a server in sight:

We’ve largely moved away from running bare metal servers in the data center to virtual servers in the cloud. The same can be said of migrating disk, SAN and NAS storage into virtual volumes in the cloud. The team from Teletext.io chronicles how they’ve built their startup entirely on AWS, taking virtualization to the next level and removing all semblances of traditional server architecture. In The Serverless Startup—Down with Servers!, the team explains how they use Amazon API Gateway, Lambda, DynamoDB, S3 and CloudFront—without a single server to manage, not even EC2.

For when visibility is in the eye of the beholder:

In 2015, microservices was certainly one of the biggest buzzwords, sparking debate between micro- and monolith-supporters over which architecture is better in terms of scalability, operational complexity and resources. The Segment blog recently released a post that explores a different angle: visibility. In Why Microservices Work for Us, the API-led analytics company’s founder Calvin French-Owen describes how monitoring microservices makes it significantly easier to gain visibility into each “microworker”, and how his 10-engineer team has scaled to 400 private repos and 70 different services.

For when your newspaper starts printing in JSON:

On The New York Times’ blog Open, written by NYT developers about code and development, the company announced that it is open sourcing an internal project: Gizmo, a toolkit that offers four packages to help developers configure and build microservice APIs and pubsub daemons. Gizmo came from the company’s years-long move to adopting Go, which it primarily uses for writing JSON APIs. This is an exciting move for NYT in establishing itself as a technology company, and the toolkit is great for anyone looking to adopt Go and microservices.

For when all you’re asking for is a little accept (just a little bit) #Aretha:

You most likely use Content-Type in your POST and PUT calls, which tells the API what type of data you’re sending. But how often do you use the Accept header? That’s the other half of content negotiation, where you designate the type of response you’d like to receive from the API. This article from Restlet helps you to understand the basics of content negotiation.
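In practice the two headers travel together. A minimal sketch in Python with the requests library (the endpoint is a placeholder):

    import requests

    # Content-Type describes the body we're sending; Accept asks the API
    # to respond with JSON rather than, say, XML.
    resp = requests.post(
        "https://api.example.com/v1/widgets",
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        data='{"name": "sprocket"}')
    print(resp.headers["Content-Type"])  # the format the server actually returned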

For when you use continuous deployments—do or do not, there is no try:

There are a lot of things we know we should be doing, but the barrier to entry is great enough that we shy away—such is the case with deployment pipelines, a process that many are quick to tout the value of, but not as many have had the time to invest in building out internally. In The Case for Deployment Pipelines, Keith Casey, Director of Product at Clarify.io, explains the tools and processes his team used to build continuous delivery pipelines on microservices and AWS. With CD pipelines, Clarify.io now experiences improved workflow, increased confidence and reduced risk, and Casey explains these benefits in depth.

For when you think your docs are all that and then some:

So you’ve defined your API using Swagger or API Blueprint, spun up the docs generator and the API documentation is all done, right? Not quite, writes James Higginbotham. In this article on the LaunchAny blog, he suggests that API docs have audiences that include more than just developers, and that we need to go beyond offering only referential documentation.

For when there are some habits you just don’t want to break:

Monitoring can be a loaded term, and the practice extends from your servers, to your APIs, to third-party APIs, to apps. Librato recently published 7 Habits of Highly Successful Monitoring Infrastructures to help take your monitoring practices from creating problems to solving problems. This article covers all the bases: why you’re monitoring (data, feedback, single source of truth), what you’re using (micro vs. monolithic systems), when to monitor (throughout the entire development lifecycle) and more. In a pinch, you can view a snapshot of the post on the Librato blog.

For when you want to start the new year design-first:

If you couldn’t make it to API Days New Zealand this October, the video of the keynote presentation by Jason Harmon, Head of API Design at Paypal, is now available. Jason presents on Design-First APIs in Practice, beginning with a deep-dive into design principles that are commonly held outside of the software world, but have yet to be widely recognized within software. He then shows how you can apply these principles to your APIs to improve efficiency, discovery and adoption. 

For when you need to find the force within you:

While we’re always looking for the latest tools, it never hurts to take a step back and learn about organizational health and leadership. In a look back on 2015, Codeship has compiled the best of its many interviews with tech leaders over the year in a piece chock-full of advice that answers, What Are the Challenges of Leading in Tech? This post draws on experiences from leaders like Brendan Schwartz, CTO of Wistia; Peter Van Hardenberg, founding developer of Heroku Postgres; and our own Co-founder and CEO John Sheehan.   

Runscope Chats with API Thought Leaders at the API Strategy & Practice Conference in Austin

Neha Sampat of Built.io presents a keynote on day 2 of the conference, image from APIStrat.

Last month, we had the pleasure of participating in the API Strategy & Practice Conference (APIStrat), one of the leading API conferences for developers to come together and discuss API trends, tools, successes and challenges. APIStrat, organized by APIEvangelist and 3scale, draws an enthusiastic crowd and impressive lineup of speakers spanning API practitioners and consumers from a variety of industries. Runscope Co-founder and CEO John Sheehan spoke on Crafting a Great Webhooks Experience, and VP of Developer Relations Neil Mansilla gave a talk on The Journey to Effective API Evangelism. One of the highlights of the conference was the live recording of the final episode of the Traffic & Weather podcast featuring John Sheehan and Steve Marx, Engineering Manager at Dropbox. 

We sat down with some of the speakers at APIStrat to chat about how APIs are changing the business conversation. Hear from Jason Harmon, Head of API Design at Paypal; Kristen Womack, Director of Product at LeadPages; Kin Lane, The API Evangelist; John Sheehan and Steve Marx in the video below, and let us know what you're excited to see in the API world in 2016 in the comments!

Looking for more APIStrat? Dive into these conference recaps:

An Inside Look at 7 Standout APIs


Being an API service provider, we’re constantly on the lookout for how developers are building better APIs. As an API consumer—we rely on dozens of APIs internally and externally to power our business operations—we appreciate well designed APIs that are easily discoverable with a clean interface and quick ways to get started. This year, 7 APIs stood out to us that met all of these considerations.

We chatted with the people who have their hands and hearts in 7 leading APIs to get a peek behind the interface at the challenges they face and the solutions they use. These APIs have some of the best overall developer experience out there, which includes:

  • Immediate utility, like quick shipping integration tools from EasyPost or producing an audio or video transcript with Clarify.
  • Intuitive documentation, like the responsive navigation in multiple languages from Stripe and Clearbit.
  • Top-notch support, like custom integration guides from Chain and SendGrid’s practice of understanding multiple use cases.
  • Reliability, like Plaid publishing uptime stats.

Let's take a look inside these 7 standout APIs to see some of the ways they maintain an excellent developer experience:

1. Chain: Invest in Customer Onboarding

Chain provides modern database technology for the financial services industry with a platform that allows developers to design, deploy and operate blockchain networks. We asked Ryan Smith what elements go into creating a great developer experience:

“We believe that if financial services are built on top of excellent database infrastructure, the excellence will permeate all the way to the end user. To that end, we try our hardest to make the developer/customer experience as great as it could possibly be. We do this in a number of ways.

Instead of having generic API tutorials or guides, we tailor an integration guide specifically for our customer. This is basically a markdown file with a precise plan of what methods to call and when to call them. We also go to great extremes to provide good errors. Every error has a unique code (e.g. CH123) so that we can quickly debug issues when customers hit the error.

Additionally, we invest a lot in our SDKs. The SDKs will always include a request ID in the description of the exception. This way, when our customers get an exception, they just paste the details to us and we can quickly go to Splunk to trace the path of the request. We write canaries using our SDKs that exercise the complete set of functionality offered by our API. We then use Runscope to automatically run the canaries and alert us if the canary should fail."

—Ryan Smith, Co-founder and CTO at Chain

2. EasyPost: Maintain Simplicity

EasyPost, the Simple Shipping API, allows developers to integrate shipping and tracking info into their apps. We spoke with Sawyer Bateman about how EasyPost overcame a familiar challenge with its API:

“This year we've grappled frequently with a common challenge for APIs—maintaining simplicity despite the frequent addition of capabilities. Specifically, we've recently added a lot of capabilities to our address verification product and wanted to avoid naively introducing a handful of new endpoints that would increase the complexity of our API. Instead, we spoke with customers and analyzed our logs to identify a new workflow that better matches how this API is being utilized.

For instance, rather than having distinct endpoints for verifying U.S. addresses, global addresses, or whether an address is residential or not, you can now send EasyPost an enumerated list of verification types to perform when you create an address. This workflow fits our best practices perfectly, is extensible, and reduces the number of requests to our API that customers need to perform.”

—Sawyer Bateman, Lead Project Designer at EasyPost

3. Plaid: Know Your Audience

Plaid is disrupting the financial industry with an API that powers two products: Plaid Connect, which collects transaction data, and Plaid ACH Auth, which helps developers set up ACH payments. We asked William Hockey about the company’s API learnings from 2015:

“I think a big learning experience and insight is you can't make everyone happy. It’s a hard combination to make an API both easy to use and simple for someone just trying to hack together a side project—but also full featured and powerful enough for your power users. Also, the Silicon Valley developer standard isn't always the best choice. We've had to deploy in enterprise settings where their gateways won't allow standard HTTP verbs like PATCH and DELETE, so every route has to be aliased to a POST request.”

—William Hockey, Co-founder at Plaid

4. Clearbit: Manage Change for a Seamless Customer Experience

For Clearbit, which provides business intelligence, its business relies on a robust set of six customer APIs, plus multiple pre-built integrations like one with Google Sheets. We spoke to Harlow Ward to learn more about how his team manages its unique API versioning process for a seamless customer experience:

“When introducing breaking changes to an API, you want your customers to keep the version they already have and update when they are ready—so they can test the changes in their own systems before releasing any changes downstream to their users. We use point-in-time versioning, which is a little complex, but it has advantages. For instance, it allows us to make adjustments within our codebase without having to create new endpoints within our system. It also allows us to easily see which API versions our customers are using. Since we’re anticipating quite a few changes to our APIs in the future as we improve them, this approach works well for us.

The key to all this of course is managing change on the backend so that API versioning is seamless for our customers. Basically, we need assurance that our versioning doesn’t break and that we can maintain this style without a lot of complexity. By running a schedule of regression tests against our API endpoints, we’re able to support tons of API versions transparently. It’s easy for us to build and update tests and add custom headers and versioning so that there are no unknowns.”

—Harlow Ward, Developer and Co-founder at Clearbit

5. Stripe: Keep the Learning Curve in Mind when Introducing API Changes

Payment infrastructure provider Stripe has been focused on developers since day 1, and its API is constantly cited as an example of how to do developer experience right. We chatted with Ray Morgan about how Stripe updated its API this year:

“One thing that I am super excited that Stripe has been able to do over this last year has been to start introducing new payment types into our API. This has been particularly tricky since we don't want to force our users to upgrade their API version in order to get a new feature; and at the same time, we don't want to just bolt on new features.

For example, when fetching a list of Stripe charges from our API, your code may be expecting them all to contain a 'card' property that looks like a card. By making old API versions explicitly declare that they want non-card charges in that list (via a GET param), we are able to provide a solution that doesn't break current code. Additionally, this means that upgrading is as simple as first adding that flag to your call sites and testing, then once that change is live, you can upgrade your API version which will basically be a no-op (since you are already getting all the objects). Later, you can choose to remove the (now) extra query param.

Overall, the changes to the API ended up being fairly minimal and completely non-breaking. This is awesome because it means a much lower learning curve for the new features. They also just feel much more a part of the Stripe API.”

—Ray Morgan, Software Engineer at Stripe

6. SendGrid: Anticipate Different API Use Cases

At the core of SendGrid, the world’s largest email infrastructure provider, is an internal and customer-facing API with hundreds of endpoints. We spoke with Matt Bernier about how SendGrid provides support for its API from all angles:

“In my role, I have my hands in all the documentation, development of API libraries and API testing, which means I have three different entry points into our API. This allows me to see the larger picture from the top down and make sure that the API is actually consistent in how it’s presented. It also lets me think about what people are actually going to do with our API.

Working on testing means I can test the inputs and outputs, but also think about how people are using the API differently than what we planned for or built a particular feature for. Using the API in different ways lets me make sure the API works for different customers with different use cases.”

—Matt Bernier, Developer Experience Product Manager at SendGrid

7. Clarify: Approach Hypermedia Wisely

Clarify takes a new approach to transcription with an API for processing audio and video files, the results of which are then easily searchable and can produce reports. Hypermedia is a key approach at Clarify, and we chatted with Keith Casey to learn more:

“As we were expanding the API this year, we went deeper on our hypermedia approach. We chose to go all in on Hypermedia because we don’t know what’s next. Literally.

Our systems are machine learning-based and we analyze and learn things about our customers and their data constantly. Some are specifically things we set out to learn and others are accidental, but valuable, discoveries that help us detect and solve their problems better. For example, in analyzing sales calls to determine which calls resulted in a sale, we’ve discovered a number of patterns that go along with successful and failed sales attempts which then informs our customers’ sales training.

It’s important to remember that part of the underlying goal of REST is to have shared language to promote understanding. Therefore, the most important aspect of hypermedia is making sure your language is as consistent and explicit as possible. Don’t make up terms because they seem cute or applicable within your team. If there’s an industry-common word, use that instead.”

—Keith Casey, Director of Product at Clarify

These tips cover several different aspects of developer experience and maintaining high quality APIs. One of our best tips is to use API monitoring to ensure that if and when anything does go wrong—an endpoint goes down, there's latency or the API is returning the wrong data—you're the first to know and can fix the problem before customers notice. You can sign up for Runscope for free and get started in minutes monitoring the APIs you and your customers depend on most. 

Are there any APIs not on this list that stand out to you? What are some of your best tips for maintaining healthy APIs and a solid developer experience? Let us know in the comments or shoot us an email, we'd love to hear from you! 


View Your Most Important API Performance Metrics in One Place with the API Test Dashboard


When testing and monitoring APIs, there’s a lot of information to consider outside of just “is my API up or down at this moment.” That’s why we’ve updated the view of your API tests to provide a comprehensive picture of API health. We’re excited to introduce the API Test Dashboard—an at-a-glance view of key API performance metrics for all of your API tests, in one place.

The API Test Dashboard provides a top-down view of key API performance metrics and allows you to interact with your tests at the ground level to catch and debug both apparent and intermittent API problems fast. The dashboard provides the same metrics as your daily API Performance Report, including the success rate and average response time over a period of time (1 hour, 1 day or 30 days).

When you sign in to Runscope, navigate to Tests in the top navigation and you’ll see your API Test Dashboard right away. Every test has its own test summary card, which can be sorted to suit your preferences, either by date, last run or failures first. Each dashboard is unique to the user, so everyone on a team can have their own individual test view.

From each test summary card, you can:

  • See the success rate and average response time of the API test over time
  • Run the test from multiple global locations and environments
  • Edit, schedule and duplicate the test
  • View when and from where your last 40 test runs passed or failed 

The Runscope API Test Dashboard gives you the tools to proactively monitor trends around your API test results and service performance so you can stay ahead of even intermittent problems that may cause serious issues for your end-users. Try out the dashboard today by signing up for Runscope for free—during the trial, you’ll get the total package of API monitoring, testing and debugging so you can prevent, identify and solve API problems fast.

Quickstart Guide to the Runscope API


This is part one of a series of blog posts about how to use the Runscope API. In this article you’ll learn how to get started using the API in less than 5 minutes, from creating your first access token to fetching the tests in your bucket.

The Runscope API provides access to the data within your Runscope account. This includes access to read, create, modify and delete nearly every type of resource — from tests, environments, schedules, test results and request logs to buckets. The API makes Runscope incredibly pliable, helping Runscope fit into how your team develops, tests and deploys software.

We know that getting started with new APIs can sometimes be daunting. But rest assured, if you're even lightly familiar with Runscope, you'll be making API calls in five minutes or less -- all from within Runscope, no tools or coding required.

Step 1: Create an app and access token

Our API uses the OAuth 2.0 protocol for authentication. If you've ever used APIs from GitHub, Twitter, Facebook or Google, then you're already familiar with this first step -- you need to create an Application. When an Application is created, a personal access token that's authorized to access data in your account is conveniently generated. Using this token you can begin making API calls immediately without having to step through the account authorization flow.

Notice in the video above that in order to make an authenticated request to the Runscope API, the access token is passed in the Authorization header.

Step 2: Start with your bucket list

All of your tests, environments and logged API traffic are organized into Buckets. Every API method related to managing API tests and logged traffic requires a bucket key. Therefore, the first API call to try is the Bucket List method. When you make a GET call to the /buckets endpoint, a list of buckets is returned in the data array.

Note: bucket keys are unique resource IDs. Bucket key values do not change after a bucket is created and are safe to cache. Bucket names, however, are not guaranteed to be unique (you can have two buckets with the same name).
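If you'd rather make the call from code, here's a minimal sketch in Python with the requests library; the token is a placeholder, and we're assuming the standard https://api.runscope.com base URL:

    import requests

    HEADERS = {"Authorization": "Bearer YOUR_PERSONAL_ACCESS_TOKEN"}

    # Bucket List: every bucket in your account comes back in the data array.
    resp = requests.get("https://api.runscope.com/buckets", headers=HEADERS)
    for bucket in resp.json()["data"]:
        print(bucket["name"], bucket["key"])  # bucket keys are safe to cache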

Step 3: Fetch your tests

Next, try the Test List method. This method lists all of the tests that reside in the specified bucket. Similar to the Bucket List method, a list of tests is returned in the data array. In the screencast below, we’re listing out all of our tests, and then making another call to fetch a specific test using the Test Detail method.

The amount of information returned may at first seem a bit overwhelming; however, if you look closely, you’ll find that it’s rather easy to understand with object key names that intuitively map back to the Runscope UI.
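The same two calls as a self-contained Python sketch; the bucket key is a placeholder, and the field names mirror the JSON described above:

    import requests

    HEADERS = {"Authorization": "Bearer YOUR_PERSONAL_ACCESS_TOKEN"}
    BUCKET = "YOUR_BUCKET_KEY"  # from the Bucket List call in step 2

    # Test List: all tests that reside in the specified bucket.
    tests = requests.get(
        "https://api.runscope.com/buckets/{}/tests".format(BUCKET),
        headers=HEADERS).json()["data"]

    # Test Detail: fetch a single test by its unique ID.
    detail = requests.get(
        "https://api.runscope.com/buckets/{}/tests/{}".format(
            BUCKET, tests[0]["id"]),
        headers=HEADERS).json()["data"]
    print(detail["name"])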

Exploring other methods and resources

Use the List Buckets and List Tests methods above to familiarize yourself with the JSON representation of the Runscope data that you regularly manage in the dashboard UI. Now that you know how easy it is to get started, we encourage you to try out other methods and resources. While you're still learning how to use the API and trying out new methods, we suggest that you create a new bucket that acts as a sandbox, or stick with read-only (GET) methods if you're making calls against production-level buckets/data.

In the follow-up articles in this series, we'll dig deeper into the anatomy of the Test Resource (steps, assertions, environments, etc.) as well as how to create new tests and modify existing ones. Soon enough, you'll feel as comfortable working with Runscope through the API as you do through the UI. Meanwhile, if you need any help navigating the API, check out the API Documentation or contact our Support team.

Original photo by Oscar Rethwill under CC License.

Migrating to DynamoDB, Part 1: Lessons in Schema Design


This post is the first in a two-part series about migrating to DynamoDB by Runscope Engineer Garrett Heel. You can also catch Principal Infrastructure Engineer Ryan Park at the AWS Pop-up Loft on January 26 to learn more about our migration.

At Runscope, we have a small but mighty DevOps team of three, so we’re constantly looking at better ways to manage and support our ever growing infrastructure requirements. We rely on several AWS products to achieve this and we recently finished a large migration over to DynamoDB. During this process we made a few missteps and learnt a bunch of useful lessons that we hope will help you and others in a similar position.

Outgrowing Our Old Database

Our customers use Runscope to run a wide variety of API tests: on local dev environments, private APIs, public APIs and third-party APIs from all over the world. Every time an API test is run, we store the results of those tests in a database. Customers can then review the logs and debug API problems or share results with other team members or stakeholders.

When we first launched API tests at Runscope two years ago, we stored the results of these tests in a PostgreSQL database that we managed on EC2. It didn’t take long for scaling issues to arise as usage grew heavily, with many tests being run on a by-the-minute schedule generating millions of test runs. We considered a few alternatives, such as HBase, but ended up choosing DynamoDB since it was a good fit for the workload and we’d already had some operational experience with it.

Migrating Data

The initial migration to DynamoDB involved a few tables, but we’ll focus on one in particular which holds test results. Take, for instance, a “Login & Checkout” test which makes a few HTTP calls and verifies the response content and status code of each. Every time a run of this test is triggered, we store data about the overall result - the status, timestamp, pass/fail, etc.

Example Test Results table in DynamoDB.

For this table, test_id and result_id were chosen as the partition key and range key respectively. From the DynamoDB documentation:

To achieve the full amount of request throughput you have provisioned for a table, keep your workload spread evenly across the partition key values.

We realized that our partition key wasn’t perfect for maximizing throughput, but it gave us some indexing for free. We also had a somewhat idealistic view of DynamoDB as some magical technology that could “scale infinitely”. Besides, we weren’t having any issues initially, so no big deal, right?
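For illustration, here's roughly what that key schema looks like when creating the table with boto3; this is a sketch with placeholder table name and capacity numbers, not our production definition:

    import boto3

    dynamodb = boto3.resource("dynamodb")

    # test_id as the partition (HASH) key means every result for a given
    # test lands on the same partition; result_id orders results within it.
    table = dynamodb.create_table(
        TableName="test_results",
        KeySchema=[
            {"AttributeName": "test_id", "KeyType": "HASH"},     # partition key
            {"AttributeName": "result_id", "KeyType": "RANGE"},  # range key
        ],
        AttributeDefinitions=[
            {"AttributeName": "test_id", "AttributeType": "S"},
            {"AttributeName": "result_id", "AttributeType": "S"},
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 100,
                               "WriteCapacityUnits": 100},
    )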

Challenges with Moving to DynamoDB

Over time, a few not-so-unusual things compounded to cause us grief.

1. Product changes

First, some quick background: a Runscope API test can be scheduled to run up to once per minute and we do a small fixed number of writes for each. Additionally, these can be configured to run from up to 12 locations simultaneously. So the number of writes each run, within a small timeframe, is: 

<number_of_locations> × <fixed>

Shortly after our migration to DynamoDB, we released a new feature named Test Environments. This made it much easier to run a test with different/reusable sets of configuration (i.e. local/test/production). The response was great: customers were condensing their tests and running them more often now that they were easier to configure.

Unfortunately, this also had the impact of further amplifying the writes going to a single partition key, since fewer tests (on average) were being run more often. Our equation grew to

<number_of_locations> × <number_of_environments> × <fixed>

For example, a test run from 12 locations against 3 environments now produces 36 times the fixed number of writes, all landing on the same partition key.

2. Partitions

Today we have about 400GB of data in this table (excluding indexes), which continues to grow rapidly. We’re also up over 400% on test runs since the original migration. Due to the table size alone, we estimate having grown from around 16 to 64 partitions (note that determining this is not an exact science).

So let’s recap:

  • Each write for a test run is guaranteed to go to the same partition, due to our partition key
  • The number of partitions has increased significantly
  • Some tests are run far more frequently than others

Discovering a Solution

Once we examined the throttled requests by sending them to Runscope, the issue became clear. We were writing to some partitions far more frequently than others due to our schema design, causing a severely imbalanced distribution of writes. This is commonly referred to as the "hot partition" problem, and it resulted in us getting throttled. A lot.

What partitions in DynamoDB look like after imbalanced writes.

Effects of the "hot partition" problem in DynamoDB.

One might say, “That’s easily fixed, just increase the write throughput!” The fact that we can do this quickly is one of the big advantages of using DynamoDB, and it’s something we did use liberally to get us out of a jam.

The thing to keep in mind here is that any additional throughput is evenly distributed amongst every partition. We were steadily doing 300 writes/second but needed to provision for 2,000 in order to give a few hot partitions just 25 extra writes/second (spread evenly, 2,000 writes/second across roughly 64 partitions works out to only about 31 each) - and we still saw throttling. This is not a long-term solution and quickly becomes very expensive.

A DynamoDB table with 100 read & write capacity and 4 partitions. As the number of partitions grows, the throughput each receives is diluted.

It didn’t take us long to figure out that using the result_id as the partition key was the correct long-term solution. This would afford us truly distributed writes to the table at the expense of a little extra index work.

Balanced writes — a solution to the hot partition problem.
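In boto3 terms, the fix is to promote result_id to the partition key and lean on a secondary index for the "all results for a test" lookup. Again a sketch with placeholder names, not our exact definition:

    import boto3

    dynamodb = boto3.resource("dynamodb")

    # result_id (effectively unique per run) as the partition key spreads
    # writes evenly; a global secondary index on test_id preserves lookups
    # of all results for a given test.
    table = dynamodb.create_table(
        TableName="test_results_v2",
        KeySchema=[{"AttributeName": "result_id", "KeyType": "HASH"}],
        AttributeDefinitions=[
            {"AttributeName": "result_id", "AttributeType": "S"},
            {"AttributeName": "test_id", "AttributeType": "S"},
        ],
        GlobalSecondaryIndexes=[{
            "IndexName": "by_test_id",
            "KeySchema": [{"AttributeName": "test_id", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 100,
                                      "WriteCapacityUnits": 100},
        }],
        ProvisionedThroughput={"ReadCapacityUnits": 100,
                               "WriteCapacityUnits": 100},
    )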

Learn More In Person & On the Blog

If you’re interested in learning more about our migration to DynamoDB and hearing from our DevOps team in person, Runscope Principal Infrastructure Engineer Ryan Park will be at the AWS Pop-up Loft in San Francisco on Tuesday, January 26 presenting Behind the Scenes with Runscope—Moving to DynamoDB: Practical Lessons Learned. Make sure to reserve your spot!

In Part 2 of our journey migrating to DynamoDB, we’ll talk about how we actually changed the partition key (hint: it involves another migration) and our experiences with, and the limitations of, Global Secondary Indexes. If you have any questions about what you've read so far, feel free to ask in the comments section below and we're happy to answer them.

This Fortnight in APIs, Release IX


This post is the ninth in a series that collects news stories, helpful tools and useful blog posts from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more.

For when you want microservices without the micromanagement:

Building your app on a microservices architecture style is no small feat. Developing, deploying and orchestrating dozens of moving parts can introduce a lot of complexity and migraines. But before you throw it into full reverse back to a monolith, learn how to build incredibly scalable and resilient microservices without the operational overhead using AWS Lambda and API Gateway. In his article From Monolith to Microservices, Part 1: AWS Lambda and API Gateway, Tom Bray teaches us how to use Lambda functions (code executed without provisioning or managing servers) as microservices that are triggered through API Gateway.

For when you need to explain to your mom what an API is:

To many of us, APIs have become like air, ubiquitous to the point where we don’t remember what we did without them. In Jennifer Riggins’ latest post, What Does Good API Management Look Like, she provides a basic, concise description of what APIs are (for your non-tech friends) and reminds us of just how impactful APIs have become—like the fact that at least 2 billion people are using them every day. The meat of the article is in an interview with Keith Casey, Director of Product at Clarify, on key requirements for good API management.

For when the LTE goes MIA:

In one moment, a 50KB API payload could fly by in milliseconds, and the next moment, that same request could take 7 to 10 seconds to complete. Understanding how mobile networks are performing may be crucial to the end-user experience of your apps. In Monitoring Mobile Network Performance from Jana Technology, learn how to measure mobile network performance, from TTFB (time-to-first-byte) to throughput.

For when you want your documentation turned up to 11:

It’s no secret that API documentation is one of the most important pieces of fostering API adoption, but doing documentation right is a little less straightforward. On the Launch Any blog, James Higginbotham explains how to bring your documentation to the next level, beyond API reference documentation, by answering 10 (OK, 11) key questions.

For when you seek a hypermedia definition you won’t HATE(OAS):  

There are lots of hypermedia definitions and explanations swirling around, but if you want the real deal on how it’s done in practice successfully, check out Keith Casey’s post on the Clarify blog. In Why Hypermedia Makes Sense, Casey explains the who, what, where, why and how of hypermedia. If it doesn’t convince you to start building hypermedia APIs, at least you’ll know how to talk about them.

For when you want the down-low on uploads:

If you’re building an API to handle file uploads, you’ll likely discover the many different design choices—direct file upload, upload by URL reference, resumable uploads, etc. In this article, Phil Sturgeon navigates the sea of uploading options for APIs, and helps you select the best solution for your platform and developers. Spoiler: it’s not multi-part forms.

For when everyone's trying to get Swagger Like Us (cue TI):

Last year, the API specification Swagger moved into the Linux Foundation under the Open API Initiative and was initially called the Open API Definition Format (OADF)—which spun up more questions than answers. Kin Lane kicked off the year with a post to clear up some of this confusion—most importantly, confirmation that Swagger will now be called OpenAPI Spec. For a deep dive into OpenAPI Spec, check out this article in the Nordic APIs blog in which Chris Wood explains how OpenAPI Spec came to be and what the future holds.

For when the chef shares his secret sauce:  

We closed out the year with one of our favorite posts yet, An Inside Look at 7 Standout APIs. Learn straight from some of the best API practitioners out there on how they build better APIs and manage change. Ray Morgan at Stripe discussed how the company introduces major API changes, Matt Bernier at SendGrid talked about creating meaningful developer experiences and Ryan Smith at Chain explained how the company plans for customer onboarding.

Notice something we missed? Put your favorite stories of the past fortnight in the comments below, or email us your feedback!

API Monitoring for API Management: The Ins & Outs


During my years at Mashery, an API management provider, I had the privilege of helping customers launch and scale successful API programs. Now that I'm at Runscope, I want to share my knowledge of how API monitoring can help companies using any form of API management. Including myself, more than 25% of Runscope employees have previously worked at API management companies, so we often find ourselves discussing the ways in which Runscope can help companies who have deployed API proxies, gateways and ESBs. 

API management and API monitoring are two different but complementary solutions that support your APIs:

  • API management will help you stand up your API, allow the intended audience to procure API keys and help you to manage throttling, access controls and more.
     
  • Once you’ve stood up your API, API monitoring keeps a constant eye on the APIs that your developers and company rely on. It does so by running scheduled tests and alerting you when an API goes down, is slow or is returning incorrect data, so you can solve API problems before they impact your customers.

Most of the time, reaping the benefits of API management requires inserting a reverse proxy or API gateway into your infrastructure and API call flow. Many of these solutions are designed for scale and reliability, but as most of us in the tech industry know, anything can fail eventually.

Here are the ways that you can prevent, identify and solve issues with API proxies and gateways a heck of a lot faster by using Runscope.

Monitoring the Ins & Outs

In order to identify the source of an API issue, keeping an eye on all the segments of an API call is critical. Gateways and proxies, whether they’re in the cloud or on-premises, introduce another component in your infrastructure, and thus another component that needs monitoring.

If your APIs are responding too slowly and/or are returning unexpected status codes or incorrect data, this could affect your mobile clients and integrations, potentially hurting your business.

Do you currently have the insight needed to identify where the problem is occurring? Are you confident that you’ll find the problem before your customers or partners do? 

Here's how you can use Runscope to monitor your proxies: 


Step 1: Create tests in Runscope that hit the gateway/proxy. Run them on a frequent schedule. I recommend once-a-minute basic tests that assert on latency, response codes, different HTTP verbs and also validate some expected responses.

Step 2: Create another set of tests that mimic the first as closely as possible, but direct them at your application server instead of your gateway. You may be able to skip some authorization and authentication, since your gateway likely provides those services. (A minimal sketch of both checks follows the note below.)

Note: You’ll most likely want to use Runscope’s 12 global cloud locations as well as the On-Premises Agent in tandem to monitor both of these scenarios from inside and outside your own network. The On-Premises Agent executes tests against private and/or unprotected APIs and sends the results to Runscope’s cloud-based dashboard.
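To make Steps 1 and 2 concrete, here's a rough plain-Python sketch of the two checks—the kind of thing Runscope runs for you on a schedule from many locations. The URLs and the one-second latency budget are hypothetical:

```python
import requests

# Hypothetical endpoints: one through the gateway, one straight at the origin.
CHECKS = {
    "gateway": "https://api.example.com/v1/status",
    "origin": "https://origin.internal.example.com/v1/status",
}

for name, url in CHECKS.items():
    resp = requests.get(url, timeout=10)
    # Assert on status code and latency, per Step 1's recommendations.
    assert resp.status_code == 200, "%s returned %d" % (name, resp.status_code)
    assert resp.elapsed.total_seconds() < 1.0, "%s exceeded latency budget" % name
```

If the gateway check fails while the origin check passes, you've localized the problem to the proxy layer before opening a single support ticket.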

Alerting the Right Team

Option 1: An automated service tells you that something is wrong.

Option 2: Your customer tells you that something is wrong.

Feel free to vote for Option 2, but be prepared for an influx of support tickets that could have been prevented.

Now that you’re running frequent functional tests against different components in your stack, Runscope can notify you and your team when assertions fail before or after hitting your gateway. We’ll even let your team know when and where issues arise via Slack, Hipchat, PagerDuty and more.

Fix It Faster

Reproducing issues is a pain. Have you ever experienced a production issue, tried to get help, and been told, "I don't see any problems on my side. Can you try reproducing the issue?"

That’s why we have shareable links, which let you solve problems faster within your own team, and even with outside groups. If a scheduled test fails when hitting the API gateway but passes when hitting your services directly, why not send links to both results directly to your vendor? Getting a clear picture of the request and response should cut out some extra steps, shorten incident time and minimize the effort spent by both you and your vendor. Everyone wins!

Recap / TL;DR

API gateways and proxies offered by API Management vendors can save your team a huge amount of time and effort, but they can also create another point of failure in your stack. Runscope can help you prevent, identify and solve API problems faster if you have a gateway or proxy in your infrastructure.  How?

Prevent: Run tests on a schedule from multiple locations. Monitor at your gateway as well as at your API source.

Identify: Notifications inform you of problems before your customers find them. Quickly determine where in your stack the problem is getting introduced.

Solve: Inspect the request/response before and after the call hits your gateway. Share the results internally and with your vendors for a faster resolution.

Let us know in the comments if you’re using API monitoring in conjunction with your API management solution, and sign up for Runscope for free to start monitoring your APIs—and your API proxies—today. I’m happy to help you get started; feel free to email me at jon@runscope.com with questions!

Migrating to DynamoDB, Part 2: A Zero Downtime Approach With GSIs


This post is the second in a two-part series about migrating to DynamoDB by Runscope Engineer Garrett Heel (see Part 1). You can also catch Principal Infrastructure Engineer Ryan Park at the AWS Pop-up Loft today, January 26, to learn more about our migration. [Note: This event has passed.]

We recently went over how we made a sizable migration to DynamoDB, encountering the “hot partition” problem that taught us the importance of understanding partitions when designing a schema. To recap, our customers use Runscope to run a wide variety of API tests, and we initially stored the results of all those tests in a PostgreSQL database that we managed on EC2. Needless to say, we needed to scale, and we chose to migrate to DynamoDB.

Here, we’ll show how we implemented the long-term solution to this problem by changing to a truly distributed partition key.

Due to a combination of product changes and growth, we reached a point where we were being heavily throttled due to writing to some test_ids far more than others (see more in Part 1). In order to change the key schema we’d need to create a new table, test_results_v2, and do yet another data migration.

Key schema changes required on our table to avoid hot partitions.
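For the curious, here's roughly what the new table definition looks like in boto3. This is a simplified sketch, not our production code: the attribute definitions and throughput values are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="test_results_v2",
    AttributeDefinitions=[
        {"AttributeName": "result_id", "AttributeType": "S"},
    ],
    KeySchema=[
        # result_id values are effectively random, so writes spread evenly
        # across partitions instead of piling onto a handful of hot test_ids.
        {"AttributeName": "result_id", "KeyType": "HASH"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)
```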

A Hybrid Migration Approach

We knew from past experience with taking backups that we’d be dealing with most operations on our entire dataset in days instead of minutes or hours. Therefore, we decided early on to do an in-place migration in the interest of zero downtime.

To achieve this, we employed a hybrid approach between dual-writing at the application layer and a traditional backup/restore.

Approach to changing a DynamoDB key schema without user impact.

Dual-writing involved sending writes to both the new and the old tables but continuing to read from the old table. This allowed us to seamlessly switch reads over to the new table, when it was ready, without user impact. With dual-writing in place, the only thing left to do was copy the data into the new table. Simple, right?
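In application code, dual-writing is pleasantly boring. A minimal sketch of the pattern (the old table's name is assumed here, and error handling and the eventual read cutover are omitted):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
old_table = dynamodb.Table("test_results")     # assumed name for the old table
new_table = dynamodb.Table("test_results_v2")

def save_result(item):
    # Every write lands in both tables, so the new table stays current
    # while the backfill catches up on historical data.
    old_table.put_item(Item=item)
    new_table.put_item(Item=item)

def get_result(key):
    # Reads stay pointed at the old table until the new one is complete.
    return old_table.get_item(Key=key).get("Item")
```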

Backing Up & Restoring DynamoDB Tables

One of the ways that AWS recommends copying data between DynamoDB tables is via the DynamoDB replication template in AWS Data Pipeline, so we gave that a shot first. The initial attraction was that, in theory, you can just plug in a source and destination table with a few parameters and presto: easy data replication. However, we eventually abandoned this approach after running into a slew of issues configuring the pipeline and obtaining a reasonable throughput.

Instead, we repurposed an internal Python project written to backup and restore DynamoDB tables. This project does Scan operations in parallel and writes out the resulting segments to S3. Notably, when scanning a single segment in a thread, we often saw a large number of records with the same test_id, indicating that a single call to the Scan operation often returns results from a single partition, rather than distributing its workload across all partitions. Keep this in mind throughout the rest of this post, as it has a few important ramifications.

The backup and restore process.
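The heart of that backup tool is DynamoDB's segmented Scan. Here's a stripped-down sketch of the idea; the bucket name and segment count are placeholders, and each segment is buffered in memory for brevity, which you'd want to avoid at our scale:

```python
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

TOTAL_SEGMENTS = 16  # placeholder; tune to table size and throughput

def backup_segment(segment):
    # Each thread scans one segment of the table in parallel.
    items = []
    paginator = dynamodb.get_paginator("scan")
    for page in paginator.paginate(
        TableName="test_results",
        Segment=segment,
        TotalSegments=TOTAL_SEGMENTS,
    ):
        items.extend(page["Items"])
    # Buffered in memory for brevity; at ~400GB you'd stream to S3 instead.
    s3.put_object(
        Bucket="example-backup-bucket",
        Key="backups/segment-%d.json" % segment,
        Body=json.dumps(items),
    )

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    list(pool.map(backup_segment, range(TOTAL_SEGMENTS)))
```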

The backup went off without a hitch and took just under a day to complete. It’s worth noting that, because of the original problematic partition key, we had to massively over-provision read throughput on the source table to avoid too much throttling. Luckily, cost didn’t end up being a major issue due to the short timeframe and the fact that eventually consistent read units are relatively cheap (think ~$10 to back up our 400GB table). The next step was to restore the data into the new and improved table; however, this was not as straightforward due to our use of Global Secondary Indexes.

Impact of Global Secondary Indexes

We rely on a few Global Secondary Indexes (GSIs) on this table to search and filter test runs. Ultimately we found that it was much safer to delete these GSIs before doing the restore. Our issue centered around the fact that some GSIs use test_id as their partition key (out of necessity), meaning that they can also suffer from hot partitions.

We saw this issue come up when first attempting to restore backup segments from S3. Remember the note earlier regarding records within a segment having the same partition key? It turns out that restoring these quickly triggers the original hot partition problem by causing a ton of write throttling—to GSIs this time. Furthermore, a GSI being throttled causes the write as a whole to be rejected, resulting in all kinds of unwanted complications.

It's important to remember that hot partitions can affect GSIs too, including during a restore.

By creating the GSIs after restoring, we let DynamoDB backfill them automatically with the required data. During the backfill, any throttling that occurs is handled and retried in the background while the table remains in the CREATING state, so usual traffic to the table is not affected by the restore.
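In boto3 terms, re-creating an index on the live table looks roughly like this (the index name, key and throughput values are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="test_results_v2",
    AttributeDefinitions=[
        {"AttributeName": "test_id", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "test_id-index",
                "KeySchema": [{"AttributeName": "test_id", "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 50,
                    "WriteCapacityUnits": 50,
                },
            }
        }
    ],
)
# DynamoDB backfills the new index in the background; throttled index
# writes are retried internally rather than rejecting writes to the table.
```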

The backfill approach worked, but unfortunately it took a very long time for a few reasons:

  1. Only one GSI can be created (and backfilled) at a time

  2. The backfill caused hot partitions, slowing everything down significantly

My guess as to why we still saw hot partitions during the backfill is that DynamoDB processes records for the index in the same order they were inserted. So while we’re definitely in a better position by having throttling occur in the background, rather than affecting the live table, it’s still less than ideal. Remember that time isn’t the only penalty here—write units cost 10 times as much as read units.

Aside from dropping the GSIs before the restore, the main thing I’d do differently next time would be to shuffle the data between the backup segments before restoring. Shuffling takes a little effort in this case because the data (~400GB) doesn’t fit in memory, but it would have gone a long way toward avoiding write throttling during the backfill, saving both time and money.
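One cheap way to get that shuffle without holding the dataset in memory is to route each record to a random output shard as it's read, instead of keeping Scan segments intact. A sketch, with the shard count and file handling simplified:

```python
import json
import random

NUM_SHARDS = 64  # placeholder

# Files left open for brevity; a real tool would manage them properly.
shards = [open("shuffled-%02d.jsonl" % i, "a") for i in range(NUM_SHARDS)]

def write_record(record):
    # Records sharing a test_id scatter across shards, so the restore no
    # longer hammers one GSI partition's worth of keys at a time.
    shards[random.randrange(NUM_SHARDS)].write(json.dumps(record) + "\n")
```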

Post-Migration Savings & Growth

It’s now been a few months since our last migration and things have been running pretty smoothly with the schema improvements. We were able to save more than $1,000 a month in provisioned throughput by not needing to over-provision for hot partitions and we’re now in a much better position to grow.

It’s safe to say that we learned a bunch of useful lessons from this undertaking that will help us make better decisions when using DynamoDB in the future. Understanding partition behavior is absolutely crucial to success with DynamoDB and picking the right schema (including indexes) is something that should be given a lot of thought.

If you'd like to learn more about our migration, feel free to leave a question in the comments section below, and attend the AWS Pop-up Loft in San Francisco tonight to hear our story in person. [Note: This event has passed.]

This Fortnight in APIs, Release X


This post is the tenth in a series that collects news stories, helpful tools and useful blog posts from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more.

For when you can’t drive 55 (without an API):

It’s not uncommon these days to see product releases accompanied by a fresh new API as part of the package. This week, Uber announced the availability of the UberRUSH API, which allows developers to incorporate the company’s new rush delivery service into apps for companies like Google Express, Rent the Runway and SAP, to name a few. Over the past year, Uber has been warming up to developers, offering public APIs, SDKs and more to get its services integrated in as many apps as possible. This article on The Verge provides a solid overview of the UberRUSH API.

For when your API docs need their own docs:

There have been a lot of great articles lately on strategies and standards around API documentation, and this fortnight is no exception. Taylor Singletary published Writing Great Documentation based on his experiences at Twitter, LinkedIn and Slack. His advice mirrors basic good writing principles that you should incorporate into your API documentation, like maintaining an active voice and highlighting key content. Use these tips, and both your English teacher and developers will be proud.

On the strategy side, James Higginbotham wrote Building Your API Documentation Strategy for SUCCESS on the LaunchAny blog. SUCCESS spells out the checkpoints to hit when writing your docs, each easy to do and paying off in the long run. If you read both of these articles, you’ll see that the authors take their own advice very well—which means an easy and useful read for you.

For when your favorite schema takes Initiative: 

Choosing an API schema or description language is sometimes like trying to decide which child is your favorite (or least favorite). This child is smart and easy to understand, while this one mows the lawn and does the dishes. If you are caught between selecting Swagger (OpenAPI Specification) and another schema, last week’s announcement that Apiary will support the OpenAPI Initiative can provide some relief.

For when you want to save money and live better by deploying across multiple clouds: 

Well known for helping its customers save money and live better, Walmart is extending that value promise to a whole new segment—engineering and ops organizations. Earlier this week, Walmart Labs open sourced OneOps, its internal cloud management and application lifecycle management platform, allowing developers to test and deploy in a multi-cloud environment and freeing them from being locked into a single cloud provider. We wrote about Walmart in our exploration of retail APIs, and it's exciting to see retailers become more innovative on the technology front.

For when the cloud takes over your water cooler chat:

Chat services like Slack and HipChat are quickly picking up adoption beyond just engineering teams and startups (in fact, they’re Runscope’s two most popular integrations). In this SD Times article, Alex Handy discusses how chat services are also giving rise to ChatOps, or the need to manage the seemingly countless messages and notifications coming in through these services. ChatOps Is Taking Over Enterprises explores the history of ChatOps dating back to the 1990s and how it continues to evolve.

For when you need to hear from microservices practitioners and not just the pundits:

Introducing a new architecture or framework into a company is often less about the technology, and more about the people and culture. This holds true for both the proponents and opponents. In this InfoQ interview with two well-respected consultants and practitioners of microservices and Self-Contained Systems titled Microservices in the Real World, you’ll learn about dealing with common “us vs. them” behaviors when applying DevOps, the difference between microservices and SCS, and the importance of application and system metrics.

For when it’s about the journey and the destination:

Migrating huge sets of data to a new database is no easy feat, particularly when you want to do so transparently in the background with zero downtime. That’s why we wrote about lessons we learned during our recent double migration to DynamoDB in a two-part series. Part 1 discusses lessons in schema design, and Part 2 shows the details of our zero downtime approach and how we used Global Secondary Indexes (GSIs). To help out anyone else migrating to DynamoDB, we wrote a Boto plugin to log errors and send them to Runscope for debugging. Happy migrating!

Notice something we missed? Put your favorite stories of the past fortnight in the comments below, or email us your feedback!


3 Benefits to Including API Testing in Your Development Process


One of the most common ways we see our customers benefiting from API monitoring is in production—making sure those live API endpoints are up, fast and returning the data that’s expected. By monitoring production endpoints, you’re in the loop as soon as anything breaks, giving you a critical head start to fix the problem before customers, partners or end-users notice.

However, we’re starting to see more and more customers create those API tests during the build process in staging and dev environments. As Matt Bernier, Developer Experience Product Manager at SendGrid, says, “We can actually begin testing API endpoints before they’re deployed to our customers, which means that we’re testing to make sure everything is exactly as we’re promising before we deliver it to them.”

Benefits of Incorporating API Testing in Development

Including API tests in your test-driven development (TDD) process provides a host of benefits to engineering teams across the lifecycle that get passed down to customers in the form of better quality services. There are 3 critical ways that your company will benefit from including API tests in your development process:

1. Test Quality

If you wait until after development to build your API tests, you’ll naturally build them to be biased toward favorable test cases. Once an API or piece of software is built, you’re focused on how it’s supposed to perform instead of the other, equally likely scenarios in which it might fail. Plus, much like iterating on software during development, iterating on API tests will only make them stronger and more comprehensive, which will benefit the team in the long term.

2. Test Coverage

Covering all the bases of potential software failures is a critical component of maintaining product quality and customer trust. API testing during development can reveal issues with your API, server, other services, network and more that you may not discover or solve easily after deployment. Once your software is out in production, you’ll build more tests to account for new and evolved use cases. Those tests, in addition to the ones you built during development, keep you covered for nearly any failure scenario, which keeps QA and customer support teams from being bombarded with support tickets.

3. Test Reuse

One of the best rewards of creating API tests in the early stages comes after deployment, when the bulk of your tests are already taken care of. For instance, Runscope allows you to reuse the same tests in multiple environments, and to duplicate and share tests. Your dev and QA teams build tests and use them in dev and staging environments, then your DevOps teams can reuse those same tests and run them on a schedule in production to monitor those use cases. DevOps then iterates and adds more tests, which can be reused by dev and QA teams when building out new endpoints. Reusing API tests across the development lifecycle facilitates collaboration among teams and provides a more comprehensive and accurate testing canon.

Using API Testing with CI/CD & TDD

You can incorporate API testing into your development process a couple of different ways. Many of our customers include API tests in their continuous integration (CI) and continuous deployment (CD) processes, either with trigger URLs or a direct plugin with Jenkins. If an API test fails during CI or CD, the process is stopped and the API issue must be fixed before the build can complete. Including API tests in this process gives engineering and product teams more assurance that they’ve covered all the bases before releasing product to customers.

You can also build tests specific to an API that’s in development, similar to how you would when building other software in TDD. Test new endpoints as they’re being built in development and staging, then trigger them to run as part of your CI/CD pipeline.
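As a rough illustration, kicking off a batch of Runscope tests from a CI step via a Trigger URL can be as simple as the following sketch (the trigger ID is a placeholder, and a real pipeline would go on to poll for the run results before promoting the build):

```python
import requests

# Placeholder trigger ID; each test or bucket exposes its own Trigger URL.
TRIGGER_URL = "https://api.runscope.com/radar/YOUR-TRIGGER-ID/trigger"

resp = requests.get(TRIGGER_URL)
# A 2xx response means the test runs were queued. Failing the build step
# on a bad status keeps broken APIs from shipping; result polling omitted.
resp.raise_for_status()
print("Runscope test runs started.")
```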

Learn More In a Free Webinar 

API testing is at the core of API monitoring: monitoring is simply running the tests you create, whether in development or post-deployment, on a schedule. Building API tests during development of any software or service has far-reaching benefits across teams, all the way down to how your customer experiences the product.

However, API testing isn’t the only thing to consider during API development. We’ll be hosting a free live webinar on Wednesday, February 17 at 10 a.m. PT, 3 Things Nobody Told You About API Development, featuring Phil Sturgeon, author of Build APIs You Won’t Hate. In this webinar, you’ll learn more about how to incorporate API testing during development, and other tips for building better APIs. Attend this webinar and sign up for a free trial of Runscope, and we’ll give you a free copy of Phil’s book! Reserve your spot today.

This Fortnight in APIs, Release XI


This post is the eleventh in a series that collects news stories, helpful tools and useful blog posts from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more.

For when you’re finally able to commit this Valentine’s Day weekend:

If you operate a CI/CD process, you’re likely making multiple commits a day. But how much thought do you put into your commit message? In the article The Art of the Commit, David Demaree explains what goes into a good commit message, and how taking a little extra time to craft a good one that reads like a headline can save you headaches in the long run.

For when it’s February and you still haven’t started on your New Year’s resolutions:

For a “micro” service, it can be an awfully big undertaking to make the move. Fortunately, Matt Ryan’s blog post, Microservices—When to Start, explains how transitioning from a monolith to microservices architecture actually has a few advantages over starting microservices from scratch. If you’re scratching your head over how to get started when you’re sitting on a monolith, this post is for you.

For when a monolith is worth a thousand microservices:

Once you’re ready to start migrating from a monolith to a microservices architecture, planning for scale is a critical component, since microservices enable rapid scaling. In this article, Vivek Juneja provides diagrams and walk-throughs of architectural patterns and 3 dimensions of scalability, including testing, monitoring and deployment.

For when real beauty is on the outside:

With most technology, the focus tends to be on backend systems and development. In a refreshing turn, this InfoQ article looks at the needs and implications for UI design in microservices architecture. The summary of Stefan Tilkov’s microXchange talk, “Wait what? Our microservices have actual human users?”, debunks several assumptions, like “backend for front-end” patterns and the idea that channels matter. He argues that we need to include UI planning as a key part of building out microservices in any organization.

For when there’s more to testing than meets-the-(C)I:

Testing and monitoring are often discussed in different camps, but they’re more related and intertwined than you think. Our latest blog post explains how incorporating API testing into your development process enables easier monitoring by running all the tests you built (during development) in production. We explore 3 Benefits to Including API Testing in Your Development Process (plus the chance to get a free copy of Phil Sturgeon’s book, Build APIs You Won’t Hate!).

For when you need to teach an old dog new tricks:

Refreshing legacy code and systems will make even the most accomplished developer a little nervous. Fortunately, GitHub recently released Scientist 1.0, an open source Ruby-based tool to mitigate those fears. In this article in The New Stack, TC Currie explains how Scientist works and how you can use it to bring large amounts of old code into new formats confidently.

For when you remember that you need people to build technology: 

It's easy to look at technology improvements or infrastructure change as individual components, but those changes are part of a larger shift that requires an entire team (and organization's) buy-in. Ron Miller explains how "every company is a software company" in Digital Transformation Requires Total Organizational Commitment. This article explores how many companies in the past several years have attempted digital transformation with a "labs" arm of the business, but moving progressive ideas from labs to the greater org is still a challenge requiring more than just engineers to be on board.

Notice something we missed? Put your favorite stories of the past fortnight in the comments below, or email us your feedback!

3 Things Nobody Told You About API Development


Many aspects of API development only get attention after development is done: documentation, testing and caching. However, to develop APIs successfully, and to have an easier time maintaining and iterating on those 3 things, it is critical to include them in the development stage, not as an afterthought. We recently sat down with Phil Sturgeon, author of Build APIs You Won't Hate, to debunk some of the misconceptions around what to include in development vs. post-production.

In the video below, Phil explains how and why to include building tests, documentation and caching during the build phase of the API lifecycle. This video covers: 

  • A pragmatic approach to ensuring that your API solves real problems
  • Methods and tools for including documentation, testing and caching in the API development process
  • Basic principles of API design

You can start including API testing in your development process today by signing up for Runscope for free. Easily import your tests from the work you've done in Swagger and experience better test quality and coverage. You can also stay in the loop on discussions around API development by joining Phil's Slack channel.

Connect Runscope with xMatters for Automated, Intelligent Communication About API Incidents


When your business runs on mission-critical APIs, it’s important to know when those services are experiencing problems such as degraded performance, poor data quality, or simply not responding at all. And as more and more APIs proliferate to other parts of your application and IT infrastructure, knowing exactly who to notify about API problems, depending on when and where they occur, becomes an even greater challenge.

Intelligent Communication for API Incidents 

Today we’re announcing a new integration between xMatters and Runscope to combine the power of xMatters’ intelligent communication platform with Runscope’s automated API testing and monitoring solution.

xMatters connects insights from any system to the people who matter in order to accelerate essential business processes. The extensible platform lets you add a layer of intelligent communication across the entire enterprise, with automated, targeted communications ensuring that critical messages and notifications get through to the right people on the job. Runscope monitors APIs for availability and correctness. Combined through this new integration, service issues detected by Runscope are routed to and handled by your existing xMatters communication plan.

Integration in a Snap

Connecting Runscope to xMatters is easy with the new xMatters Integration Platform. Once xMatters has been configured to accept Runscope API test result webhooks, you just need to enable webhook notifications on one or more of your Runscope tests. The Integration Guide has complete setup details.

With the xMatters integration, notifications about test results can be sent to groups or individuals. Team members can respond with “Rerun”, which triggers another test run in Runscope and ends the event within xMatters. xMatters can also create FYI events to notify groups or individuals when Runscope detects a Live Traffic Alert. The notification includes a link the recipient can click to view the details and acknowledge the notification.

Let the Right Teams Know About API Issues Today

Runscope customers can begin integrating Runscope monitors with xMatters today. If you aren’t already monitoring your mission-critical APIs, sign up for Runscope for free today and see how you can solve API problems faster with API monitoring connected with xMatters. Learn more about the integration on the xMatters blog and check out the full announcement.

Monitor Your Webhook Workflows with New Incoming Requests for API Tests


API testing and monitoring has traditionally focused solely on outbound requests that simulate what a client of a particular API or endpoint would experience. But APIs are increasingly becoming a two-way conversation, with data flowing into apps asynchronously based on events happening (e.g. GitHub commits, Twilio text messages received, bots receiving requests, etc.). The most popular way this occurs is via a pattern known as webhooks.

Testing a webhooks implementation can be difficult. While you can exercise specific endpoints that consume an event payload (Runscope is great at this), generating test events that mimic real-world conditions is a manual process. What we need is an API testing tool that understands not only data going out via API calls, but also data coming in via asynchronous callbacks, and the relationship between the two.

Introducing Monitoring and Testing for Webhooks

Today we've added a new feature to our API testing and monitoring lineup that allows you to combine outgoing API calls with incoming callback requests. To understand how this works, let's take a look at a simple example.

Testing Twilio Status Callback URLs

Twilio is an API that allows developers to add communications features (voice, text, video) to their applications. One of the most popular products they offer is their SMS API. When sending SMS, it's really important to know whether or not a message was delivered. The Twilio API offers a "Status Callback URL" that will notify your app of delivery status after you send a message. Delivering a message takes a little bit of time, so it's not possible to include a delivery status in the response to the original API call. Instead, you can pass a URL to be notified at asynchronously once the delivery attempt is completed.

To test this scenario, we can create a test that sends an SMS message via an outbound API call, then waits for a callback request containing the delivery status using the new Incoming Request Step type.
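For reference, the outbound half of that test boils down to a single Twilio API call. Here's a sketch in Python; the credentials, phone numbers and callback URL are placeholders, and in the actual test the callback URL is the one generated for the Incoming Request step:

```python
import requests

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

resp = requests.post(
    "https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json" % ACCOUNT_SID,
    auth=(ACCOUNT_SID, AUTH_TOKEN),
    data={
        "From": "+15005550006",
        "To": "+15551234567",
        "Body": "Hello from a webhook test!",
        # Twilio POSTs the delivery status (e.g. MessageStatus=delivered)
        # to this URL once the delivery attempt completes.
        "StatusCallback": "https://example.com/your-incoming-request-url",
    },
)
resp.raise_for_status()
```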

Incoming Request Steps can be validated just like the responses of outgoing API calls. In this case, we're checking for a form POST parameter named MessageStatus to make sure the value is correct. You can also check JSON attributes, extract and store values to use in subsequent requests with variables, or write more advanced assertions with JavaScript and the Chai Assertion library.

Testing Conditional Webhooks

Clearbit is one of our favorite new services. We make heavy use of their Enrichment API for qualifying leads. When looking up an email address, sometimes you get an immediate response (if it has been looked up recently) but other times you'll get a callback once the data has been retrieved. Using incoming requests combined with a condition, we can test this scenario as well:

If our initial request returns a 200 OK, we can assert on the response and extract data for subsequent requests. If we get a 202 Accepted back, we know that we'll get the details via the webhook URL and can handle the data there instead.
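Outside of Runscope, that conditional flow looks roughly like this in Python. The endpoint and webhook_url parameter reflect how we use Clearbit's Enrichment API, but treat the specifics as illustrative:

```python
import requests

resp = requests.get(
    "https://person.clearbit.com/v2/combined/find",
    params={
        "email": "someone@example.com",
        # Where Clearbit will POST the result if it isn't immediately ready.
        "webhook_url": "https://example.com/your-incoming-request-url",
    },
    auth=("sk_your_api_key", ""),  # placeholder API key
)

if resp.status_code == 200:
    person = resp.json()  # cached result: assert on the response directly
elif resp.status_code == 202:
    pass  # result arrives later at webhook_url; handle it there
```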

These are just a couple examples of how you can use Incoming Request steps to verify your webhook and callback workflows. There are many other possibilities including testing bot responses, simple API integration and workflow automation, and much more.

Now Available for All Runscope Accounts

Incoming Request steps are now available for all free and paid Runscope accounts. If you're new to Runscope, be sure to sign up for your free trial to try it out. You'll see the "Add Incoming Step" option at the bottom of the test steps editor. As always, if you have any questions or need help getting started, contact our support team any time.

 
