Building an API Proxy with Bottle

Recently I found myself needing to obfuscate some API endpoints in a particular project's source code. There were a number of reasons for this, but mainly it was because I was building an integration with a service whose API is proprietary, and access to the API documentation was an 'invite only' affair. Eventually I will want to check that code into a public GitHub repository as a 'portfolio' item. As a way around this, I planned to create a very lightweight app, which I could manage separately from my main repo and run in a container, and have my 'main' application make requests to it. The third-party API is never queried directly, and only the proxy knows anything about it, so there is no longer any trace of its details in the repo.

There are many Python 'micro' frameworks, but I settled on Bottle for a number of reasons:

1) Speed will be key here. I want the extra 'step' to impinge as little as possible on the smooth running of my app. Bottle seems to be one of the quicker Python web frameworks.

2) All the API proxy will need to do is grab info from incoming requests (namely, headers and query parameters), forward them on to the 'real' API endpoints and then hand the response back. The Bottle docs display its global Request and Response objects prominently. After skimming these, I saw how easy it was to both access headers on the incoming request and set attributes on the response. It was settled; I would be using Bottle.
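
As a minimal sketch of the pattern that sold me (the /ping route and the X-Proxied header are my own placeholders, not part of the project):

from bottle import request, response, route

@route("/ping")
def ping():
    # read a header straight off the global request object
    token = request.get_header("Authorization")
    # set attributes directly on the global response object
    response.set_header("X-Proxied", "yes")
    response.status = 200 if token else 401
    return "pong"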
 

Building in Bottle

Here is the official Bottle 'Hello World' example: 

from bottle import route, run, template

@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)

OK, so this looks nice and simple. As with many Python web frameworks, routes are declared as decorators on functions that 'handle' the request and return a response.

In my original 'monolithic' app, here is an example of how I call the remote system's API:

...

def get_single_folder_details(request_user, folder_id: int):

    """
    Gets the details for a single folder from the VDR API

    Makes an http call to the External Service to get a single folder,
    sends the json response to the data parsing function.

    :param request_user: the user logged in during the request
    :param folder_id: int
    :return: a data class for a single folder's details

    """
    VDR_BASEURL = get_setting("remote_system_base_url")

    access_token = SocialToken.objects.get(account__user=request_user)
    url = f"{VDR_BASEURL}/rest/v1/folders/{folder_id}"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json",
    }

    response = requests.get(url, headers=headers)
    result = parse_get_folder_details(response.json())

    return result

...

A note on the use-case here. The remote system in question is a 'Virtual Data Room' (VDR). If 'VDR' doesn't mean much to you, think of a document management system (DMS): there are files and folders. This function calls an API endpoint to get the details of a 'Folder' object in the remote system. The aim here is to have VDR_BASEURL point at a container running my proxy, with a much simpler path:

url = f"{PROXY}/folder/{folder_id}"

The request in the code above could be proxied through Bottle like so: 

from bottle import route, run, request, response
import requests

@route("/folder/<folder_id>")
def folder_details(folder_id):

    # Grab the headers necessary to make the call

    token = request.get_header("Authorization")

    accept = request.get_header("Accept")

    headers = {"Authorization": f"{token}", "Accept": f"{accept}"}

    # Use the requests library to actually get the response from the external system

    external_response = requests.get(
        f"https://example.com/rest/v1/folders/{folder_id}",
        headers=headers,
    )

    # ensure we have matching status codes, by directly setting it on the global response object
    response.status = external_response.status_code

    # Directly return the content of the response from the external system. We've already set our
    # status code, so we can just do this and Bottle will take care of the rest (see below) 
    return external_response.content

...

Notice that inside my function, I am not explicitly passing 'request' or 'response' as an argument. Those are the global objects that the Bottle framework makes available to us. request.get_header() lets us pull individual headers off the incoming request (the full set is also available as the dict-like request.headers). The BaseResponse class in Bottle has a number of writable attributes, so we can set them directly.


Bottle also does a lot of heavy lifting depending on what your functions return (see the docs on Generating Content). Bottle will, in effect, always attempt to 'do the right thing'. If you return a Python dictionary, it will give the client consuming your API a JSON response, and so on. Bottle's flexibility in this regard was another reason why I chose it for this project. In the above example we are handing it the bytes from the requests library's response object.
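
For instance, a throwaway example (the /health route is hypothetical, not part of my proxy):

@route("/health")
def health():
    # Bottle serializes the dict to JSON and sets Content-Type: application/json
    return {"status": "ok"}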


To the client, the result doesn't look as if we've done anything differently from calling the 'real' API endpoint.



The notion of 'request' is beginning to get a bit nebulous here (Bottle's request object? The requests library? Are there more kinds of request?). In an attempt to avoid confusion, I'm going to alias the requests library:

import requests as http_handler

I'm pleased with the simplicity of my function, BUT, given that I have to encapsulate about 12 different endpoints for my integration, things are going to become pretty un-DRY, pretty quickly, with regard to header retrieval and response creation. Best to split those off into some utilities:

from typing import Dict, Tuple

from bottle import LocalRequest, LocalResponse


def _prepare_request_config(request: LocalRequest) -> Tuple[Dict, str]:
    token = request.get_header("Authorization")
    accept = request.get_header("Accept")

    headers = {"Authorization": token, "Accept": accept}
    params = request.query_string

    return headers, params


def _prepare_response_config(
        response: LocalResponse, external_response: http_handler.Response
) -> None:
    response.set_header("Content-Type", external_response.headers["Content-Type"])

    # Content-Disposition is only present on file downloads, so only copy it
    # across when the external system actually sent it
    content_disposition = external_response.headers.get("Content-Disposition")
    if content_disposition:
        response.set_header("Content-Disposition", content_disposition)

    response.status = external_response.status_code

For my use-case I only need to collect query params, but this _prepare_request_config() function could easily be extended to capture the incoming request's body. _prepare_response_config() looks the way it does because I know that, for the integration I am building, at some point I am going to be downloading files over HTTP. Note that it doesn't return anything because we don't need it to; we are mutating the global response object that Bottle has made available to our function.
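
As a hedged sketch of that extension (the helper's name and its three-part return are my own; my actual code only returns headers and params):

def _prepare_request_config_with_body(request: LocalRequest) -> Tuple[Dict, str, bytes]:
    headers = {
        "Authorization": request.get_header("Authorization"),
        "Accept": request.get_header("Accept"),
        "Content-Type": request.get_header("Content-Type"),
    }
    params = request.query_string
    body = request.body.read()  # the raw bytes of the incoming request body

    return headers, params, body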

With these changes, the 'get folder info' proxy now looks like this:

@route("/folder/<folder_id>")
def folder_details(folder_id):
    headers, _ = _prepare_request_config(request)

    external_response = http_handler.get(
        f"https://example.com/rest/v1/folders/{folder_id}",
        headers=headers,
    )

    _prepare_response_config(response, external_response)

    return external_response.content

I think that's pretty neat! So, I now have a single file with about 12 of these functions, which all look pretty similar (save for a different route decorator and external API endpoint). In terms of HTTP methods, the route() decorator defaults to GET, so for deleting data in the remote system, my route decorators take an additional keyword argument:

@route("/soft-delete-folder/<folder_id>", method='DELETE')
def folder_delete(folder_id):
    headers, _ = _prepare_request_config(request)

    external_response = http_handler.delete(
        f"https://example.com/rest/v1/folders/{folder_id}",
        headers=headers,
    )

...

It's worth pointing out, however, that there are decorators for all the HTTP verbs, which can be used without the need for the additional argument:

@delete("/soft-delete-folder/<folder_id>")
def folder_delete(folder_id):

...
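
The same pattern extends to endpoints that carry a body. A sketch, assuming post has been imported from bottle, and using the body-capturing helper sketched earlier (the /create-folder path and the upstream URL are placeholders):

@post("/create-folder")
def folder_create():
    headers, _, body = _prepare_request_config_with_body(request)

    external_response = http_handler.post(
        "https://example.com/rest/v1/folders",
        headers=headers,
        data=body,
    )

    _prepare_response_config(response, external_response)

    return external_response.content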

I've even managed to do the OAuth2 dance through the proxy. See the example below. The auth() function, which accommodates OAuth2's user-authorization step, shows a query param being captured from the incoming request. It also returns Bottle's redirect() helper, demonstrating the aforementioned flexibility of the framework.

@route("/auth")
def auth():
    _, params = _prepare_request_config(request)

    return redirect(
        f"https://example.com/authorization?{params}"
    )

Notes on Speed

As mentioned earlier, speed is obviously going to be a consideration here. What was previously an application calling a remote service is now an application calling another application, which in turn calls the remote service. It's a not-insignificant extra step and it will almost certainly add some latency.

In Python web development, async (or async-first) frameworks have rapidly ascended in popularity. However, there is more than enough scepticism from reputable sources (two notable examples HERE and HERE) that the asyncio module might not be a panacea for Python's speed issues. At the very least, it's much more complicated than 'async = faster'. For where I am at in my Python journey, I don't feel as though I've fully wrapped my head around asyncio's API, so as things stand (for me), the challenge of this proxy is a question of 'how fast can I get this synchronous app to run?'

This article, subtitled 'A realistic look at Python web frameworks', was incredibly influential on me when I was looking into this. I would encourage anyone to read the whole thing, but the main takeaways are:

  1. If you want Python itself to be faster, use PyPy. PyPy is an alternative implementation of Python, written in RPython (a restricted subset of Python), with a view to making it faster. It describes itself as highly compatible, so an app this simple is more than likely to run on PyPy.
  2. Properly configure the WSGI Server in front of the app.

PyPy can be taken care of easily enough. In the Dockerfile for my proxy, I am now pulling from the official PyPy base image:

FROM pypy:3

...
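
The rest of the Dockerfile is unremarkable. A minimal sketch, assuming a requirements.txt pinning bottle, requests, gunicorn and gevent, and a proxy.py entrypoint (both file names are my own):

FROM pypy:3

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["pypy3", "proxy.py"]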

Let's just zoom out for a second with regard to WSGI server configuration. I know from experience that people coming to web development and Python (often both for the first time) usually have it explained to them that a WSGI server is a requirement, but they perhaps aren't led to understand how exactly it enables an application to serve more than one request at a time.

Taking a look at Gunicorn (the most popular WSGI server?), its docs describe it as having '...a central master process that manages a set of worker processes'. So, you can serve requests in parallel because you have more than one worker, right? Well, it's not as simple as 'number of workers = the number of requests that can be handled at any one given time'. The docs state: 'DO NOT scale the number of workers to the number of clients you expect to have. Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second', so clearly it's more nuanced than that.

If we go further down the road of Gunicorn config consideration, we learn that each worker itself can contain multiple threads. Threads are a further way of handling concurrency: Not only do you have multiple workers, but each worker process itself can handle multiple requests, because it can have multiple threads.
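
As a quick sketch (the numbers are placeholders): Bottle passes these keyword arguments straight through to Gunicorn, and setting threads above 1 gets you Gunicorn's threaded 'gthread' worker:

# 4 worker processes, each with 8 threads: up to 32 requests in flight at once
run(host="0.0.0.0", port=9000, server="gunicorn", workers=4, threads=8)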

However, looking at those Gunicorn docs, workers and threads are not 'free'. You can't simply add more of each and expect not to run into issues, because (depending on the hardware) there will always be limits. There is one last step we can take to improve on multiple workers with multiple threads, without eating up resources: Gunicorn can have what is known as an async worker, and these use a special kind of 'pseudo' (or 'green') thread, which is computationally less demanding.

Going back to the docs: '...any web application which makes outgoing requests to APIs will benefit from an asynchronous worker'. I can find many articles that agree: HERE, HERE, and HERE. If your program is making network requests (which is practically the only thing my proxy does), then at any point where one of these green threads is waiting for the external API to return a response, it can yield the CPU to another thread, so that another request can make progress. This sounds complicated, but the low-level stuff is mostly abstracted away from you. You've built the synchronous app and it does its job; Gunicorn, its workers and their pseudo-threads take care of handling multiple requests.


My Bottle app now gets run with the async workers like so: 

import gevent.monkey

# This monkey-patching step needs to happen before anything else is imported.
# It tells the blocking parts of the standard library to use green threads
# instead: they 'give up' the CPU whilst waiting on I/O, so that other
# concurrently handled requests can make progress.
gevent.monkey.patch_all()

...

run(host="0.0.0.0", port=9000, server="gunicorn", workers=4, worker_class="gevent")

(Very Quick) Conclusion

Bottle makes it easy to build proxy services. PyPy and Gunicorn (via greenlets) make it easy to get your Python web app to run faster, whilst serving more requests!

This post has very much been written with somebody like me (at this moment in time) in mind: you've already got a track record of building Python web apps and are now beginning to wrap your head around asynchronous code and concurrency. I've tried to write as high-level an overview as possible, as I often find the landscape confusing.

I've already referenced them above, but I strongly recommend reading through the articles linked in this post.


 
