Thursday, July 31, 2014

Gossip papers

In preparation for my talk at the Cassandra Summit 2014 ("Demystifying Gossip"), I've been going back and reading a lot of existing literature on gossip and epidemic broadcast protocols. What follows is simply a listing of all the papers being carried around in my bag at this time, plus a quick note or two that might be helpful to others. This is not a thoroughly exhaustive listing by any measure, as it mainly grew organically: I started off with one paper, then checked its references, read those, checked their references, ...

These papers aren't listed in any particular order, and any grouping is largely as I understand the problem at this time. There are other papers I'm sure I'm missing, and I haven't made it through all of these yet (and a few I need to reread, a few more times!).

Epidemic Algorithms for Replicated Database Maintenance - the seminal paper describing gossip/epidemic protocols (1987). great read

The Peer Sampling Service: Experimental Evaluation of Unstructured Gossip-Based Implementations - First major paper to spell out the peer service in relation to gossip systems

Bimodal Multicast

HyParView - Hybrid Partial View, example of a peer sampling service. each node maintains a small partial view of the cluster

Epidemic Broadcast Trees - Plumtree paper. Gossip overlay that uses a spanning tree. implemented in riak 2.0.

Gossip-Based Broadcast - Nice overview of gossip systems written by HyParView/Plumtree authors

Correctness of a Gossip Based Membership Protocol - partial view alg, with a formal proof

SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol

Efficient Reconciliation and Flow Control for Anti-Entropy Protocols - adding anti-entropy to gossip exchanges. basis of cassandra's gossip subsystem

GEMS: Gossip-Enabled Monitoring Service for Heterogeneous Distributed Systems

Spatial Gossip and Resource Location Protocols

Failure Detectors (not specifically gossip, per se, but close enough for my needs)

A Gossip-Style Failure Detection Service - not dissimilar to cassandra's basic notion of failure detection. based on random gossip and a heartbeat counter

The ϕ Accrual Failure Detector - outputs a suspicion level for a node based on the rate of received heartbeats. a modified version is used in cassandra.

The Failure Detector Abstraction
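The phi accrual idea is simple enough to sketch. Below is a toy Python version (mine, not from the paper) that converts time-since-last-heartbeat into a suspicion level; for simplicity it assumes exponentially distributed inter-arrival times, where the original paper models them with a normal distribution:

```python
import math

class PhiAccrualDetector:
    """Toy phi accrual failure detector (simplified sketch).

    Assumes heartbeat inter-arrival times are exponentially
    distributed; phi rises the longer we go without a heartbeat.
    """

    def __init__(self, window=1000):
        self.intervals = []      # recent inter-arrival times (seconds)
        self.last_arrival = None
        self.window = window

    def heartbeat(self, now):
        """Record a heartbeat arriving at time `now`."""
        if self.last_arrival is not None:
            self.intervals.append(now - self.last_arrival)
            self.intervals = self.intervals[-self.window:]
        self.last_arrival = now

    def phi(self, now):
        """Suspicion level: -log10(P(silence this long | node is alive))."""
        if not self.intervals:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        elapsed = now - self.last_arrival
        # Under the exponential model, P(no heartbeat for t) = e^(-t/mean),
        # so phi = -log10(e^(-t/mean)) = (t/mean) * log10(e).
        return (elapsed / mean) * math.log10(math.e)
```

The caller compares phi against a configurable threshold (Cassandra's phi_convict_threshold plays this role): a low phi means the silence is expected given past heartbeat rates; a high phi means the node is very likely down.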

Implementations

The two implementations I know best are those found in cassandra and riak. Serf is an implementation of the SWIM paper in golang, but I've never had a chance to check it out.
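To ground the terminology from the papers above, here's a minimal sketch of a single push-pull anti-entropy round over versioned key/value maps. This is a toy model in Python of my own devising, not the actual wire protocol from any of these papers or implementations:

```python
import random

def gossip_round(local_state, peer_states, fanout=3):
    """One push-pull anti-entropy round.

    Each state is a dict of key -> (value, version); on conflict,
    the higher version wins. We exchange with `fanout` random peers.
    """
    for peer in random.sample(peer_states, min(fanout, len(peer_states))):
        # push: the peer adopts anything we know that's newer
        for key, (val, ver) in list(local_state.items()):
            if key not in peer or peer[key][1] < ver:
                peer[key] = (val, ver)
        # pull: we adopt anything the peer knows that's newer
        for key, (val, ver) in list(peer.items()):
            if key not in local_state or local_state[key][1] < ver:
                local_state[key] = (val, ver)
```

Run repeatedly on every node, updates spread epidemically: the Demers paper shows rounds-to-convergence grows logarithmically with cluster size, which is why a tiny fanout suffices even for large clusters.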


Wednesday, February 22, 2012

Cassandra Java client libraries

I'm keeping a running list of Cassandra Java client libraries, so it's not always going to be complete. If you have any additions, please ping me!
  • Hector
    • actively developed in the open source community
  • Astyanax
    • developed by Netflix
    • open sourced Feb 2012
  • Cassie 
    • written in Scala
    • developed by Twitter
    • open sourced Feb 2012
  • Cassandra-Hector-Wrapper 
    • A wrapper around Hector to simplify the access / object model
    • developed by zcourts
    • open sourced Feb 2012
  • Skeletor
    • written in Scala
    • A wrapper around Hector

Tuesday, October 19, 2010

Adventures with Squid caching - Vary header

Note: this entry is part of a series on RESTful web services, HTTP, and squid caching, all of which starts with an overview.

Background
The HTTP 1.1 RFC includes a lot of new features to support caching. One of the headers we found interesting was the Vary header. It allows an origin server to tell the caching proxy that the URI will have variants (different encodings, formats, etc) as different representations of the resource. Basically, it allows your service to say, "Hey, for a URI, I support JSON, XML, and PDF, or I can give that to you compressed or not". Of course the formats can be whatever you choose, and further you can vary on whatever else you want. The key is that the Vary header acts as kind of a pointer and says, "I'm listing other headers that are the variants, and they will enumerate the specifics of what's different".

Sounds interesting, but it's damn hard to visualize, so let's make an example. Let's say our service has the following resource URI for customer orders:

/customers/{cust_id}/orders

And that the service can return the order data in different formats (mime-type); for argument's sake we'll support JSON, XML and PDF. Thus, a typical HTTP invocation would look like this (using an abbreviated HTTP message format here):

c>
  GET /customers/123/orders HTTP 1.1
  Accept: application/json

s> 
  200 OK
  Vary: Accept
  Accept: application/json, application/xml, application/pdf
  Cache-Control: max-age=some_value_in_seconds

  { ... json data ...}


Then if another client came along and requested xml data for the same URI, the caching proxy would see that, while it has json in cache (assuming it's not stale), it does not have xml cached, and would forward the request to the origin server. The cache proxy knows to do this because the Vary header returned from the first call informed it to allow variants on the incoming request's Accept header. Those variants were declared in the HTTP response Accept header to the first request. To illustrate:


c>
  GET /customers/123/orders HTTP 1.1
  Accept: application/xml

s>
  200 OK
  Vary: Accept
  Accept: application/json, application/xml, application/pdf
  Cache-Control: max-age=some_value_in_seconds

<xml data ...... />
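Under the hood, the proxy is effectively building its cache key from the URI plus the values of whatever request headers the stored Vary header names. Here's a rough Python sketch of that lookup logic (my illustration, not squid's actual implementation; the function and parameter names are made up):

```python
def variant_cache_key(uri, vary_header, request_headers):
    """Build a cache key for a resource whose response carried a Vary header.

    vary_header: the stored Vary value, e.g. "Accept" or "Accept, Accept-Encoding"
    request_headers: the incoming request's headers (lowercase keys)
    """
    parts = [uri]
    for name in [h.strip().lower() for h in vary_header.split(",")]:
        # each named request header's value selects a distinct variant
        parts.append(f"{name}={request_headers.get(name, '')}")
    return "|".join(parts)
```

Two requests for the same URI with different Accept values thus produce different cache keys, which is exactly why the xml request above misses even though the json variant is sitting fresh in cache.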

OK, so now you have two variants of your resource in the caching proxy (squid), but, interestingly, they will expire at different wall clock times. It is important to know that the two will not "invalidate together" when one variant's TTL expires; they essentially live independent lives. Let's take an example (while it's possible for the origin server to return different max-age values for the Cache-Control header, for sanity's sake I'll make it 1 hour across the board.):
  1. request 1 asks for json data at 10:00am; it will be live in cache until 11:00am
  2. request 2 asks for xml data at 10:59am; it will live in cache until 11:59am
If you are worried about freshness of data (at whatever time scale), this is something to be aware of in squid. (Note: I'll be covering resource invalidation in a future installment of this series.)

Lessons Learned
We initially introduced Vary because it allowed us to roll out new, more compact data representations without disturbing existing clients (that is, without forcing them to parse a new format). However, the cache miss rate caused by different clients requesting different formats, coupled with the cross-country latency, became a problem. If the latency to generate each variant is nontrivial, whether because of expensive database reads or, in our case, cross-country/cross-data center calls, then every variant in use means some caller eats the pricey cost of a cache miss for that specific variant, even when the other variants are fresh in cache. Further, it becomes a burden on the service's engineers to figure out why some teams see "fast responses" (cache hits) and other teams see "your slow-ass service" (cache misses), as service owners need to maintain the matrix of which team consumes which variant just to do the rough math on how likely each is to get a cache hit.

If the latency per variant is inexpensive and the load on back-end resources is bearable, using Vary in your HTTP caching can open up some interesting data format possibilities and ways in which you can service clients.

Monday, October 18, 2010

Adventures with Squid caching - overview

I've been knees-deep in the REST services world for most of the last 12 months, and while that domain may be simpler to model and deal with than RPC or SOAP, one of the most compelling advantages is making use of HTTP caching. Instead of baking your own caching layer or application, using stateful server sessions, or, worst of all, forcing clients to use a fat-client jar that does its own caching (yes, I've committed that sin in the distant past), there are several open-source HTTP cache proxy applications that support the HTTP caching spec to differing degrees.
For our needs, we just went with squid, as a team member had lots of experience and success with it at a previous gig.

Of course, we could have used a look-aside cache (something like memcached), but we were interested in having a write-through cache in front of the service to reduce hops and latency (I'll explain below).


Basic premise and early decisions
We were breaking up a massive www application into smaller standalone components/services for, ya know, the usual sorts of reasons: scalability, availability, no single points of failure, and so on. In addition to pulling apart legacy code and rethinking our data persistence options, we also opened up another data center - on the other side of the country. Our data itself wasn't going to be directly replicated to the new data center (not going into any details on this) as there were many hands in the pie that could not be easily pulled out, slapped, or cut off. To that end, we had to deal with applications deployed only in the new data center that needed the existing customer data. Enter cross-country latency - umm, yummy.
So now that you know some of the ground rules and constraints of the work, we decided early on to use REST as the premise/framework for building out our services. We wanted to stick as close to the HTTP RFC as possible, within reason, without just inventing a bunch of weird-ass rules around using our stuff. And, true to the title of this series, we wanted to use a reverse cache proxy to not only improve the basic performance of the service, but also to help relieve the cross-colo latency.

So, why is this interesting to you? I'm going to record my experiences with REST, squid, and the rest, and I'll point out what worked well, what didn't, and what was freaking painful to figure out, get working, or just plain abandon. I'll cover the following topics:
  • Vary (HTTP header)
  • ETag (HTTP header)
  • cache invalidation via PUT/POST/DELETE
  • tying them all together

Of course, as we went through building out everything in the service, it didn't all fall into neat, even buckets of functionality like the bullet points above. Like everything else when building out new functionality, it's a little coding here, a little configuration there, and some ITOps work to bundle it all up.

Oh, yeah, beer was involved, as well. Lots.

Wednesday, September 8, 2010

Emacs and sql-plus fun

Found this nugget to make sql-plus less painful:

On emacs, M-x sql-oracle does the trick

Tuesday, April 27, 2010

REST != !SOAP

OK, so we're heavy in recruitment time here at $DAYJOB, and I see lots of resumes that claim deep REST experience. However, when I probe candidates about building URIs, resources, using HTTP correctly, and so on, they are largely clueless. Probe a bit further and really all they've done is create some simple HTTP endpoint that takes query params or XML in the request body.

Now, there's nothing wrong at all with simple endpoints, and they're damn useful. However, trying to pass them off as REST is another thing. Those folks just seem to think, "If it's not SOAP, I must be doing REST!".

Wrong. So very wrong. Reminds me of this blog I read recently, and its associated reddit discussion.


Thursday, April 1, 2010

SQLPlus options

So, for the truly geeky, you can "pass" in parameters to alter the default environment/behavior of sqlplus. The orafaq page helped me figure this out:
  • create a login.sql file for all your commands
  • add the directory to the $SQLPATH env variable (on most client machines, you'll need to create the env var)
  • launch sqlplus, and you should have all your options setup
My current login.sql looks like this:

define _editor=emacs
set timing on
set serveroutput on
set colsep '|'
set linesize 120
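On a Unix-like shell, wiring up step two might look like the following (assuming you keep login.sql in ~/sql - a made-up location, pick whatever directory you like):

```shell
# hypothetical location for login.sql; use your own
mkdir -p "$HOME/sql"
export SQLPATH="$HOME/sql"   # sqlplus searches SQLPATH for login.sql
# add the export line to ~/.bashrc (or equivalent) to make it stick,
# then launch sqlplus and login.sql runs automatically:
# sqlplus user/pass@db
```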

Like I said, for the geeky, but damn handy when you don't want to muck with an IDE like TOAD.

Tuesday, January 5, 2010

IBM J9 Diagnostics

Here at $DAYJOB we're using the IBM J9 JVM for our test and production deployments. I've had some performance and memory problems recently, so here's a quick entry about some of the things the J9 JVM provides for diagnostics.

First up: the diagnostics documentation summary.

Next, here's a series of things you can get the J9 JVM to give you:
  • GC log - use the standard -verbose:gc command line option to get an XML file dump of garbage collection activity. You can redirect the output to a file as well.
  • Heapdump log - generated when you call it explicitly in your app or, more likely, automatically when you have an OutOfMemory condition. Creates *.phd files, which are not human-readable
  • Javadump log (a/k/a javacore or thread dump) - contains human-readable data on the operating system, the application environment, threads, stacks, locks, and memory. Creates files starting with javacore...text
  • System dump (a/k/a core dump) - typically only produced when the jvm fails unexpectedly. Contains active processes, threads, and system memory. Creates files named Snap*.trc
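To make the first two concrete, here's roughly what the command line looks like as I understand it - a hedged example, since flag syntax varies between J9 versions, and "myapp" paths here are made up; check the diagnostics docs for your exact JVM:

```shell
# hypothetical invocation; verify flags against your J9 version's docs
java -verbose:gc \
     -Xverbosegclog:/var/log/myapp/gc.xml \
     -Xdump:heap:events=systhrow,filter=java/lang/OutOfMemoryError \
     -jar myapp.jar
```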