Yes, half of the programmers in the world know that Python can't run on multiple cores because of the famous GIL.
Half of the Python backend programmers know there is a "high-performance networking library" called Gevent.
So, if you have a multi-core server at your disposal to build a multi-user web service, what are your choices?
I/O bound
If the service is I/O bound, i.e. light on CPU, possibly involving calls to other services, etc., Gevent is right for it. Gunicorn and uWSGI both allow running multiple worker processes; use them to fill up your cores, and raise the coroutine count (for uWSGI) so that each core (each Gevent loop) handles enough concurrent clients. This is what Gevent is good at.
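For example, on a 4-core box the invocations might look like this (a sketch; `myapp:app` is a placeholder module:callable, and flag names should be checked against your installed versions):

```shell
# Gunicorn: one gevent worker per core, each multiplexing up to
# 1000 concurrent client connections on its event loop
gunicorn --worker-class gevent --workers 4 --worker-connections 1000 myapp:app

# uWSGI equivalent: --gevent sets the greenlet count per worker
uwsgi --http :8000 --processes 4 --gevent 1000 --module myapp:app
```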
CPU bound
If some requests involve a lot of processing, don't let them run in the Gevent process. Once a request occupies the CPU, no other greenlet can handle any requests. The usual, simple solution is to let threads do the task, but Python is famous for having a non-functioning multi-threading model. Use GIPC to hand the heavy lifting off to a background process so the event loop can continue serving the other easy-going customers.
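GIPC's API revolves around gipc.start_process and gipc.pipe, which make the wait cooperative with the Gevent hub. As a framework-free illustration of the offloading pattern itself (stdlib only, so not gevent-cooperative; `crunch` is a made-up stand-in for the heavy work):

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # stand-in for the CPU-heavy work; naive Fibonacci burns CPU
    if n < 2:
        return n
    return crunch(n - 1) + crunch(n - 2)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        # the heavy work runs in a child process, so the serving
        # process's event loop would be free to handle other clients
        result = pool.submit(crunch, 25).result()
        print(result)  # 75025
```

With Gevent in the picture, `.result()` would block the whole loop, which is exactly the problem gipc's pipe handles are designed to avoid.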
If all requests are CPU bound, better not to do ad-hoc forking. Either distribute the processing to a worker farm with Celery (or a lighter-weight handcrafted solution using Redis / Thrift, etc.), or just use Nginx to reverse-proxy to a number of Python nodes, with or without Gevent.
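The Nginx route is a few lines of config. A minimal sketch, assuming one Python process per core, each on its own port (addresses and ports are placeholders):

```nginx
upstream python_nodes {
    # one Python process per core, each listening on its own port
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
    server 127.0.0.1:8004;
}

server {
    listen 80;
    location / {
        proxy_pass http://python_nodes;
    }
}
```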
Bonus:
One stupid thing I did to myself:
I was running a kind of reverse-proxy app on top of Gevent with monkey-patching. Some client requests require an array of requests to an upstream server. With my eyes closed, I wrote this:
resps = [requests.get(u) for u in upstream_urls]
reduce(proc, resps)
And then I realized I was dead. The running time of the first line is linear in the number of requests; Gevent provided absolutely nothing helpful. Using grequests.map did the trick, but I wondered why.
The explanation is simple. Each "requests.get" call is evaluated one by one, so Gevent is deliberately waiting for the previous request to finish before starting the next. It still switches to other greenlets if other client requests arrive at the same time, but this list of requests is never handed to Gevent all at once.
resps = grequests.map([grequests.get(u) for u in upstream_urls])
is definitely the simpler solution.
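The same trap exists in any cooperative framework, not just Gevent. A stdlib asyncio sketch (no gevent or grequests needed; the "requests" are fake sleeps) makes the difference measurable:

```python
import asyncio
import time

async def fetch(url):
    # stand-in for a network round trip taking ~0.1 s
    await asyncio.sleep(0.1)
    return url

async def sequential(urls):
    # each await finishes before the next request even starts
    return [await fetch(u) for u in urls]

async def fanned_out(urls):
    # all requests are handed to the event loop at once
    return await asyncio.gather(*(fetch(u) for u in urls))

urls = ["u%d" % i for i in range(5)]

t0 = time.monotonic()
asyncio.run(sequential(urls))
seq_time = time.monotonic() - t0   # ~0.5 s: five sleeps, one after another

t0 = time.monotonic()
asyncio.run(fanned_out(urls))
fan_time = time.monotonic() - t0   # ~0.1 s: five sleeps, overlapped
```

The list comprehension is the `requests.get` loop; `gather` plays the role of `grequests.map`, submitting the whole batch to the loop before waiting.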