{"_id":"57882185b008c91900aae9c5","user":"5633ec9b35355017003ca3f2","project":"56be3387be55991700c3ca0d","category":{"_id":"5787da96b008c91900aae865","__v":0,"project":"56be3387be55991700c3ca0d","version":"56be3388be55991700c3ca10","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-07-14T18:31:50.937Z","from_sync":false,"order":3,"slug":"troubleshooting","title":"Troubleshooting"},"parentDoc":null,"version":{"_id":"56be3388be55991700c3ca10","project":"56be3387be55991700c3ca0d","__v":8,"createdAt":"2016-02-12T19:33:28.313Z","releaseDate":"2016-02-12T19:33:28.313Z","categories":["56be3389be55991700c3ca11","57646709b0a8be1900fcd0d8","5764671c89da831700590782","57646d30c176520e00ea8fe5","5764715d4f867c0e002bc8e3","57698fa2e93bfd190028815c","576c2af16c24681700c902da","5787da96b008c91900aae865"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"__v":2,"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-07-14T23:34:29.482Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"Occasionally our service will return a \"503 Service Unavailable\" error. These occur periodically, on the order of 0.05% of all requests, and can be due to a number of causes.\n\nOne relatively mundane cause is normal system maintenance, during which we may make an index read-only for roughly 1–2 minutes to restart an instance of Solr. This can happen once or twice a month, and should not affect searches.\n\nOther causes are trickier to pin down, due to esoteric combinations of factors such as networking packet loss and JVM garbage collection pauses.\n\nWe have a few recommendations to harden your application in the event of these errors:\n\n1. **Upgrade your index** to a more recent version of Solr. Some users are running on older versions of Solr, which can contribute to these kinds of 503 errors. We strongly recommend that any Solr 3.x index which experiences problems be replaced with a newer index on Solr 4.x.\n\n2. ** Retry your requests.** A 503 error in our systems is almost always intermittent, and may be retried immediately, or multiple times with an exponential backoff. In particular, we recommend that incremental upgrades be processed in a queue, which lends toward easier automatic retries.\n\n3. **Upgrade** to a dedicated cluster. Having dedicated resources available can provide more consistency by mitigating some classes of Solr memory management issues experienced in multitenant shared clusters.\n\n4. **Report the problem.** If your application has experienced a high rate of 503s sustained for more than a few minutes, and we haven't announced a larger outage on [:::at:::websolrstatus](https://twitter.com/websolrstatus), it may be indicative of a larger problem that we need to know about. Let us know your index URL and send us some example requests to support@websolr.com.\n\n## Implementing retries\nThe implementation of retries will vary based on the platform and Solr client. As an example, recent versions of Sunspot include an optional session proxy which can automatically retry these kinds of errors. 
You could add something like this to a Rails initializer:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"Sunspot.session = Sunspot::SessionProxy::Retry5xxSessionProxy.new(Sunspot.session)\",\n      \"language\": \"ruby\"\n    }\n  ]\n}\n[/block]\n## Queued updates\nUser activity which gradually creates or updates single records over time should have their index updates queued with a system such as Resque. That way, temporary errors such as a 503 are isolated from the everyday operation of the rest of your application, and failed jobs can be more easily retried.","excerpt":"","slug":"503-service-unavailable","type":"basic","title":"503 Service Unavailable"}

# 503 Service Unavailable


Occasionally our service will return a "503 Service Unavailable" error. These errors are intermittent, occurring on the order of 0.05% of all requests, and can have a number of causes.

One relatively mundane cause is routine system maintenance, during which we may make an index read-only for roughly 1–2 minutes while we restart an instance of Solr. This can happen once or twice a month, and should not affect searches.

Other causes are trickier to pin down, owing to esoteric combinations of factors such as network packet loss and JVM garbage collection pauses.

We have a few recommendations to harden your application against these errors:

1. **Upgrade your index** to a more recent version of Solr. Some users are running on older versions of Solr, which can contribute to these kinds of 503 errors. We strongly recommend that any Solr 3.x index which experiences problems be replaced with a newer index on Solr 4.x.

2. **Retry your requests.** A 503 error in our systems is almost always intermittent, and the request may be retried immediately, or several times with an exponential backoff. In particular, we recommend processing incremental index updates through a queue, which makes automatic retries much easier.

3. **Upgrade to a dedicated cluster.** Having dedicated resources available provides more consistent performance by mitigating some classes of Solr memory management issues seen in multitenant shared clusters.

4. **Report the problem.** If your application has experienced a high rate of 503s sustained for more than a few minutes, and we haven't announced a larger outage on [@websolrstatus](https://twitter.com/websolrstatus), it may indicate a larger problem that we need to know about. Send your index URL and some example requests to support@websolr.com.

## Implementing retries

How you implement retries will vary with your platform and Solr client. As an example, recent versions of Sunspot include an optional session proxy which automatically retries requests that fail with these kinds of errors. You could add something like this to a Rails initializer:

```ruby
Sunspot.session = Sunspot::SessionProxy::Retry5xxSessionProxy.new(Sunspot.session)
```

## Queued updates

Index updates driven by user activity, where single records are created or updated gradually over time, should be queued with a background job system such as Resque. That way, a temporary error such as a 503 is isolated from the everyday operation of the rest of your application, and failed jobs can be retried more easily.
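As a rough sketch of what that can look like, assuming a Rails model indexed with Sunspot, a minimal Resque job along these lines keeps indexing out of the request cycle. The job and queue names here are hypothetical; adapt them to your application:

```ruby
# app/jobs/index_record_job.rb (hypothetical job, for illustration only)
# Re-indexes a single record in a background worker, so a transient 503
# fails only this job rather than the user's request.
class IndexRecordJob
  @queue = :search_index

  def self.perform(class_name, id)
    record = class_name.constantize.find_by(id: id)
    return if record.nil? # the record may have been deleted in the meantime

    # Raises on a 503, which marks the job as failed so it can be retried.
    Sunspot.index!(record)
  end
end

# Enqueue from your model instead of indexing inline, for example:
#   after_commit { Resque.enqueue(IndexRecordJob, self.class.name, id) }
```

Failed jobs then show up in Resque's failure backend, where they can be retried by hand or automatically with a plugin such as resque-retry.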