Upgrading your Bonsai cluster to a more powerful one is a straightforward process, designed to minimize downtime and maintain the integrity of your data. This document outlines the steps involved, the expected downtime, and the effort required for a successful upgrade.
Bonsai uses a two-phase snapshot-and-restore process to migrate your data from one hardware cluster to another. A snapshot of your data is taken from the current hardware and restored to the new hardware. There is no downtime or production impact to your cluster during this phase, though it can take time, depending on how much data you have on your existing hardware.
Once this is completed, a second snapshot-restore operation is performed. This operation is simply a delta — covering only the data that has changed since the phase-1 snapshot was taken. Usually this second phase lasts a few seconds, or at most a minute or so. During this phase, your cluster is placed into a read-only mode. Search traffic is not impacted, but writes are blocked until the restore completes.
Once this final restore has completed, the cluster will be running entirely on the new hardware, with no data loss.
During this read-only window, write requests will receive a 403 Forbidden response, along with a JSON body indicating: "Cluster is currently read-only for maintenance. Please try your request again in a few minutes. See [status.bonsai.io](http://status.bonsai.io/) or contact [support@bonsai.io](mailto:support@bonsai.io) for updates." It's essential to handle this gracefully in your application.
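One way to handle the maintenance window gracefully is to retry writes with a backoff whenever a 403 comes back. The sketch below is illustrative only — the `send_write` callable and the retry parameters are assumptions, not part of Bonsai's API:

```python
import time

def write_with_retry(send_write, doc, max_attempts=5, base_delay=1.0):
    """Retry a write while the cluster is in read-only maintenance mode.

    `send_write` is any callable that submits the write and returns an HTTP
    status code; a 403 is assumed to mean "read-only for maintenance".
    """
    for attempt in range(max_attempts):
        status = send_write(doc)
        if status != 403:
            return status  # success, or a different error handled elsewhere
        time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise RuntimeError("cluster still read-only after retries")
```

In practice you would also buffer or queue the writes themselves, so nothing is lost if the window outlasts your retry budget.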
To begin, navigate to the cluster’s Plan under Settings on the cluster dashboard.
If you haven’t added billing information yet, please do so at Account Billing. Detailed steps can be found in Add a Credit Card.
Once there is billing information listed on the account, you will see the different options for changing the cluster’s plan. Select the plan you would like to change to and click the green Change to … Plan button. In this example, this `documentation_cluster` is on a Sandbox plan and it will be upgraded to a Standard Micro plan.
After a successful plan change, a notice appears at the top: Plan scheduled for update. In this example, the plan has been upgraded to the Standard Micro plan.
Please note that a plan downgrade may fail if it would put the cluster into an Overage State on the new plan. For example, downgrading from a Standard Micro plan to a Sandbox plan will fail if there are 11 shards on the cluster (the Sandbox plan has a shard limit of 10).
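Before requesting a downgrade, you can check whether the cluster fits within the target plan's shard limit. A minimal sketch, assuming you've already fetched the plain-text output of `GET /_cat/shards` (which prints one line per shard) with your cluster credentials:

```python
def shard_count(cat_shards_output: str) -> int:
    """Count shards from the text output of GET /_cat/shards,
    which lists one shard per line."""
    return sum(1 for line in cat_shards_output.splitlines() if line.strip())

def can_downgrade(cat_shards_output: str, target_plan_limit: int) -> bool:
    """True when the cluster's shard count fits the new plan's limit."""
    return shard_count(cat_shards_output) <= target_plan_limit
```

The limit for each plan is listed on its plan page; the comparison here simply mirrors the overage rule described above.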
Terraform is an infrastructure as code (IaC) tool designed to define and manage both cloud and on-premises resources through human-readable configuration files. These files can be versioned, reused, and shared, enabling streamlined collaboration and iterative development. Bonsai has created and supports a Terraform Provider to enable creation and management of Bonsai Elasticsearch and OpenSearch Clusters from within your infrastructure as code.
The Bonsai Terraform Provider works by integrating with the Bonsai REST API via the (also newly created) Bonsai Cloud Go API Client. To create resources with the Bonsai API, you'll need a way to authenticate your requests. Currently, both the Bonsai Go API Client and the Bonsai Terraform Provider support HTTP Basic Authentication over TLS, using an API Key and Secret Token combination.
To generate an API Key and Secret Token, head over to your account's API Tokens overview page.
and click the "Generate Token" button.
More in-depth instructions are available at the Bonsai Documentation site!
Next, you'll need to configure the Bonsai Terraform Provider with the API Key and Token pair generated in the previous step.
There are two options here:
Choose an option that best fits your use case, and you're ready to start creating Elasticsearch and OpenSearch clusters!
# provider.tf
# Configure the Bonsai Provider using the required_providers stanza.
terraform {
required_providers {
bonsai = {
source = "omc/bonsai"
version = "~> 1.0"
}
}
}
provider "bonsai" {
# Default: The Bonsai Terraform Provider will fetch the api_key configuration
# value from the BONSAI_API_KEY environment variable.
#
# Uncomment the following line to set directly via variable.
# api_key = var.bonsai_api_key
# Default: The Bonsai Terraform Provider will fetch the api_token configuration
# value from the BONSAI_API_TOKEN environment variable.
#
# Uncomment the following line to set directly via variable.
# api_token = var.bonsai_api_token
}
Configuring a Bonsai Elasticsearch or OpenSearch cluster is easy: we only need to make four decisions, reflected as Terraform Resource Attributes for our cluster:
Details about the plans, spaces, and releases available to your account are available either within the Bonsai.io control plane, via the API (we recommend using the Bonsai Cloud Go API Client), or via the Terraform Data Sources.
Here's an example of listing all available Plans, Releases, and Spaces.
Related note! You really only need to output the bonsai_plans data list, as each Plan object will embed a list of the spaces it's available in, and the various Elasticsearch/OpenSearch releases that the plan supports.
# data.tf
// Fetch all Available Bonsai Plans
data "bonsai_plans" "list" {}
// Fetch all Available Bonsai Spaces
data "bonsai_spaces" "list" {}
// Fetch all Available Bonsai Releases
data "bonsai_releases" "list" {}
// Output collected data
output "bonsai_spaces" {
value = data.bonsai_spaces.list
}
output "bonsai_release" {
value = data.bonsai_releases.list
}
output "bonsai_plans" {
value = data.bonsai_plans.list
}
With that output, we can create an optimized OpenSearch cluster that matches our needs.
Related note! We've built an awesome (if we do say so ourselves...) interactive tool to help you estimate the best plan for your use-case! We're also always happy to help your team navigate the sizing process — reach out to us via the Support button in your console!
# main.tf
resource "bonsai_cluster" "test" {
name = "Terraform Created Cluster"
plan = {
slug = "sandbox"
}
space = {
path = "omc/bonsai/us-east-1/common"
}
release = {
slug = "opensearch-2.6.0-mt"
}
}
And finally, we'll configure our infrastructure data output, so that we can reference our newly created cluster directly in the rest of our infrastructure.
Related note! If you're wondering what the sensitive = true configuration does below, it's required because the cluster's user and password are treated as sensitive outputs by the provider.
If you don't mark their output as sensitive, you'll receive an error to reduce the risk of accidentally exporting sensitive information as plain text!
We also recommend encrypting your Terraform state at rest, and if possible, before storage.
output "bonsai_cluster_id" {
value = bonsai_cluster.test.id
}
output "bonsai_cluster_name" {
value = bonsai_cluster.test.name
}
output "bonsai_cluster_host" {
value = bonsai_cluster.test.access.host
}
output "bonsai_cluster_port" {
value = bonsai_cluster.test.access.port
}
output "bonsai_cluster_scheme" {
value = bonsai_cluster.test.access.scheme
}
output "bonsai_cluster_url" {
value = bonsai_cluster.test.access.url
}
output "bonsai_cluster_slug" {
value = bonsai_cluster.test.slug
}
output "bonsai_cluster_user" {
value = bonsai_cluster.test.access.user
sensitive = true
}
output "bonsai_cluster_password" {
value = bonsai_cluster.test.access.password
sensitive = true
}
output "bonsai_cluster_state" {
value = bonsai_cluster.test.state
}
output "bonsai_cluster_stats" {
value = bonsai_cluster.test.stats
}
output "bonsai_cluster_plan" {
value = bonsai_cluster.test.plan
}
output "bonsai_cluster_release" {
value = bonsai_cluster.test.release
}
output "bonsai_cluster_space" {
value = bonsai_cluster.test.space
}
For additional details on managing your Bonsai resources with Terraform, check out our blog post, "Introducing Bonsai's Terraform Provider for Elasticsearch and OpenSearch".
Check out the Provider on GitHub and the official HashiCorp Terraform Registry!
One of the biggest hurdles a search developer comes across is how to get data from one cluster into a new one. In a perfect world we would have fast and reliable reindexing scripts to quickly teardown and/or rebuild indices. A good example of this pattern is in the elasticsearch-rails gem’s import tasks. See also a more in-depth example of an Indexer class in the search for Jekyll gem, searchyll.
Sometimes the ideal approach isn't possible, whether from accumulated tech debt or contextual constraints. For those on our Sandbox and Standard plans, this problem is compounded: in an effort to keep these plans accessible, the Snapshot API is not available on demand. Read more in our write-up here: https://bonsai.io/docs/snapshots-on-bonsai. In particular, backups aren’t taken regularly on non-production plans such as our Sandbox plan. In this case, what options are available? Let’s explore a couple of strategies.
There are two solutions for reindexing or migrating your cluster when the Snapshot API isn’t available. The first is to use the elasticsearch-dump library, and the second is to manage it with a custom solution. Regardless of which way you choose, you’ll need to follow this larger process:
elasticsearch-dump is a mature javascript library that has been around through nearly every release of Elasticsearch. It can download data and mappings, migrate between clusters directly, and do all sorts of imports and exports necessary for the search engineer’s workflow.
The process for getting started is simple:
npm install elasticdump
Here’s an example of what a migration might look like:
# Backup the index mapping to a file:
elasticdump \
--input=https://key:secret@fir-123.us-east-1.bonsaisearch.net:443/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
# Backup the index data to a file:
elasticdump \
--input=https://key:secret@fir-123.us-east-1.bonsaisearch.net:443/my_index \
--output=/data/my_index.json \
--type=data
# Index the mapping and data into your new cluster from the files:
elasticdump \
--input=/data/my_index_mapping.json \
--output=https://key:secret@fir-123.us-east-1.bonsaisearch.net:443/my_index \
--type=mapping
elasticdump \
--input=/data/my_index.json \
--output=https://key:secret@fir-123.us-east-1.bonsaisearch.net:443/my_index \
--type=data
You’ll need to use your cluster credentials to access your index from a terminal session. See our docs on Cluster Credentials here: https://bonsai.io/docs/credential-management.
Much of what elasticdump does can be manually written if necessary, using curl or whatever language you prefer. For example, downloading mappings can be done using curl:
curl -XGET "https://key:secret@fir-123.us-east-1.bonsaisearch.net:443/_mapping?pretty=true" > mappings.json
And later, with a new cluster, you can PUT your new mappings to its corresponding index:
curl -XPUT "https://key:secret@fir-123.us-east-1.bonsaisearch.net:443/index_name/_mapping" \
-H 'Content-Type: application/json' \
-d @mappings.json
It’s important to note that the downloaded mappings will need to be edited or split apart before PUTting them to the new indices. If you manage the reindex yourself, either dump the index data with elasticdump as above, or write scripts to reindex straight from your database.
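As a sketch of that editing step: a full-cluster `GET /_mapping` dump is keyed by index name, and each index's mappings must be PUT individually to `/index_name/_mapping`. A minimal illustration (the index name below is hypothetical, and older Elasticsearch versions with mapping types shape the body slightly differently):

```python
import json

def split_mappings(cluster_mapping_json: str) -> dict:
    """Split a full-cluster GET /_mapping dump into per-index PUT bodies.

    The dump is shaped like {"index_name": {"mappings": {...}}, ...};
    a PUT to /index_name/_mapping expects just the inner mappings object.
    """
    dump = json.loads(cluster_mapping_json)
    return {index: body["mappings"] for index, body in dump.items()}
```

Each value in the returned dict is ready to be sent as the request body of the corresponding `curl -XPUT .../index_name/_mapping` call shown above.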
Depending on how many versions you are upgrading you’ll need to navigate breaking changes between versions, like the drop of _doc types in v6.x. There is extensive coverage of breaking changes in the Elasticsearch documentation. See also our guides on moving from major versions:
We’ve seen it all and are here to help. Please reach out to support@bonsai.io and we’ll point you in the right direction. Cheers!
Bonsai Elasticsearch can be removed via the command line or Heroku’s app Dashboard.
This will destroy all associated data and cannot be undone!
The Bonsai Add-on can be removed from the application via Heroku’s command line tool:
<div class="code-snippet-container">
<a fs-copyclip-element="click" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this">heroku addons:destroy bonsai
----- Removing bonsai from sharp-mountain-4005... done, v20 (free)</code></pre>
</div>
</div>
This will destroy the cluster and all of its data instantly.
To remove the Bonsai Add-on from the Heroku Dashboard, log in to your Heroku account and click on the app the Add-on is attached to. In this example, that Add-on is <span class="inline-code"><pre><code>mycoolelasticsearchapp</code></pre></span>:
Next, click on the Resources tab:
Find the Bonsai Add-on in the list of application Add-ons, then click the caret menu:
Select “Delete Add-on” from the list of options. You will need to type in the app’s name to confirm the removal of the Bonsai Add-on:
Click on “Remove Add-on” to complete the process.
When it’s time to scale, you can easily upgrade your shared plan, or migrate to a dedicated cluster.
Most plan changes take effect instantly. However, if you’re trying to upgrade or downgrade across an architecture class, there will be a delay in processing.
Updating your cluster plan with Heroku is fairly simple. You can do this in either of two ways:
You can view all of our available plans for Heroku users on the Bonsai Heroku Add-on page:
From there, choose the plan you’d like to change your cluster to and note the plan slug. For example, our <span class="inline-code"><pre><code>Standard SM</code></pre></span> has a plan slug of <span class="inline-code"><pre><code>standard-sm</code></pre></span>, our <span class="inline-code"><pre><code>Private Compute LG</code></pre></span> has a slug of <span class="inline-code"><pre><code>private-compute-lg</code></pre></span>, etc. Open up your project in a terminal and run:
If you have several applications with Bonsai Add-ons, you can specify which one you would like to upgrade or downgrade with the <span class="inline-code"><pre><code>-a</code></pre></span> flag:
<div class="code-snippet-container"><a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">$ heroku addons:upgrade bonsai:standard-sm -a mycoolelasticsearchapp</code></pre>
</div>
</div>
Log into your Heroku account and open your app dashboard. In this example, the app is called <span class="inline-code"><pre><code>mycoolelasticsearchapp</code></pre></span>:
Click on the Resources tab to view your Add-ons:
You’ll see a list of your Add-ons, with a caret menu on the far right side.
Clicking on the caret icon will open up a dropdown menu. Select <span class="inline-code"><pre><code>Modify Plan</code></pre></span> to open a modal, and you can choose a new plan for your cluster:
Heroku handles all of the billing, and prorates by the second.
A contrived example: a customer signs up for a $50/mo plan at 00:00:00, then decides to upgrade to a $150 plan at 00:10:30, then downgrades back down to a $50 plan at 01:00:00, then destroys it at 02:45:00. The customer’s bill would be roughly calculated as:
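Sketching that out with per-second proration (assuming a 30-day billing month for illustration; Heroku's actual proration basis may differ slightly):

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # assumed 30-day billing month

def prorated(monthly_price, seconds):
    """Per-second proration: charge the fraction of the month actually used."""
    return monthly_price * seconds / SECONDS_PER_MONTH

bill = (
    prorated(50, 10 * 60 + 30)          # $50 plan, 00:00:00 to 00:10:30
    + prorated(150, 49 * 60 + 30)       # $150 plan, 00:10:30 to 01:00:00
    + prorated(50, 1 * 3600 + 45 * 60)  # $50 plan, 01:00:00 to 02:45:00
)
print(round(bill, 2))  # about $0.31 for under three hours of use
```

The point of the contrived example: even jumping between plans, you only ever pay for the seconds each plan was active.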
This amount would be added to the customer’s next invoice from Heroku.
Bonsai offers two main architecture classes: multi-tenant and single tenant. The multi-tenant class – sometimes called “shared” – is designed to allow clusters to share hardware resources while still being sandboxed from one another. This allows us to provide unparalleled performance per dollar in a way that’s also extremely affordable.
The single tenant class – sometimes called “dedicated” – maps one cluster to a private set of hardware resources. Because these resources are not shared with any other cluster, single tenant configurations provide maximum performance for a slightly higher price.
Plan changes within the multi-tenant class take place instantly. However, moving a cluster from one class to another will take some time. This is because data will need to be moved across a network boundary and search resources may need to be created.
In other words, if a customer upgrades from a “Shared” plan to a “Dedicated,” the Bonsai app will need to provision and configure new private servers, wait for them to come online and pass health checks, then migrate their cluster’s data out of the multi-tenant class and into their new single tenant class.
Alternately, if a customer downgrades from a “Dedicated” plan to a “Shared” plan, a data migration will need to be performed, and the old servers torn down.
The time required for these migrations can vary. If you have questions or concerns, please send us a note at support@bonsai.io
We offer a free Hobby plan for development and testing on Heroku. A list of all available plans and capacities can be found at https://elements.heroku.com/addons/bonsai.
There are two ways to add Bonsai to your Heroku app:
1. Through the Heroku CLI tool
2. Through the Heroku app dashboard
You can add Bonsai to your app with this command:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">$ heroku addons:add bonsai
----- Adding bonsai to some-appname-4005... done, v18 (free)</code></pre>
</div>
</div>
You can verify that the operation was successful by running <span class="inline-code"><pre><code>heroku config:get BONSAI_URL</code></pre></span>:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">$ heroku config:get BONSAI_URL
----- http://username:password@redwood-12345.us-east-1.bonsai.io/</code></pre>
</div>
</div>
If this value is null, then Bonsai was not properly added to your account. Try again and look for error messages. If you still have problems, give us a shout.
Once the Add-on has been successfully added, you can check on the status of your cluster by running <span class="inline-code"><pre><code>heroku addons:open bonsai -a APP_NAME</code></pre></span>
Bonsai supports multiple search engines. The default is Elasticsearch, which will be used if nothing is specified on creation. Users can specify which search engine to use with the <span class="inline-code"><pre><code>--engine</code></pre></span> parameter. To use it on the command line, you would use something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-4" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript">heroku addons:create bonsai:[plan] [-a APP_NAME] [--engine=elasticsearch]</code></pre>
</div>
</div>
Allowed values are: "elasticsearch" and "opensearch".
You have a fair amount of flexibility to choose the search engine version your cluster can run. There is a command line flag for specifying which version to use. This flag is called <span class="inline-code"><pre><code>--version</code></pre></span>, and it can be invoked in a couple different ways. To use it on the command line, you would use something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-5" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-5" class="hljs language-javascript">heroku addons:create bonsai:[plan] [-a APP_NAME] [--version=X.Y]</code></pre>
</div>
</div>
Bonsai only supports certain versions of a given search engine, so arbitrary versions cannot be provisioned. If you request a version that is not available, Bonsai will default to the latest available version.
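That fallback behavior can be sketched as follows — an illustration of the rule just described, not Bonsai's actual implementation:

```python
def resolve_version(requested, supported):
    """Return the requested version if it is supported; otherwise fall
    back to the latest available version, per the rule described above."""
    if requested in supported:
        return requested
    # "latest" assumes versions compare correctly as numeric tuples
    return max(supported, key=lambda v: tuple(int(p) for p in v.split(".")))
```

So a request for an unsupported version silently resolves to the newest one on the supported list.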
There is a list of available versions documented here: Supported Elasticsearch Versions. In short, the options available will be determined by three factors:
The <span class="inline-code"><pre><code>version</code></pre></span> and <span class="inline-code"><pre><code>engine</code></pre></span> parameters also work in your app.json, if you use PR apps and Heroku pipelines:
<div class="code-snippet-container">
<a fs-copyclip-element="click-6" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-6" class="hljs language-javascript">{
"name": "Small Sharp Tool",
"description": "This app does one little thing, and does it well.",
"keywords": [
"productivity",
"HTML5",
"scalpel" ],
"addons" : [
{
"plan": "bonsai:hobby",
"options": {
"version": "6.5.4",
"engine": "elasticsearch"
}
}
]
}</code></pre>
</div>
</div>
To view a list of our current supported regions, see our documentation here:
For further help with your app.json and addons, see Heroku’s documentation here:
To add Bonsai through the Heroku UI, open up your application in the Heroku dashboard.
Click on either the Resources tab, or the “Configure Add-ons” link. This menu will have a search bar for various Add-ons. Begin typing “bonsai,” and the autocomplete will find the Bonsai Elasticsearch Add-on:
Click on the Add-on to add it to your application. This will bring up a new screen where you will be able to select a payment plan for your new cluster. See the Bonsai Heroku Add-ons page for details about what each plan offers.
When you are ready, click on “Provision.” Your new cluster will be instantly created. Your dashboard will now show this:
When Bonsai is added to your application, a new environment variable called BONSAI_URL is created and initialized with the URL to your cluster. This is the URL that you will need in order to interact with your cluster.
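For example, an application can read the URL from the environment at startup rather than hard-coding it. A minimal standard-library sketch:

```python
import os
from urllib.parse import urlsplit

def bonsai_connection(env=os.environ):
    """Parse the BONSAI_URL Heroku injected into a host plus credentials.

    Raises KeyError (failing fast) if the add-on was never attached.
    """
    parts = urlsplit(env["BONSAI_URL"])
    return parts.hostname, (parts.username, parts.password)
```

Your Elasticsearch client library can then be configured from these values instead of a literal URL checked into source control.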
Your application should be configured to read this URL directly from the environment; it should not be hard-coded or shared with others. If you would like to confirm that Elasticsearch is up and running, you can retrieve the URL by clicking on the Settings tab in the Heroku Dashboard.
There will be a section called “Config Vars”:
Click on “Reveal Config Vars” to see your environment variables. This will reveal the value of each variable like so:
You can copy the contents of the BONSAI_URL to your clipboard and paste it into a browser or curl command to see the response from Elasticsearch:
<div class="code-snippet-container">
<a fs-copyclip-element="click-7" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-7" class="hljs language-javascript">$ curl https://username:password@somehost-1234567.us-east-1.bonsai.io
{
"name" : "PvRcoFq",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "DNlbVYS0TIGYwbQ6CUNwTw",
"version" : {
"number" : "5.4.3",
"build_hash" : "eed30a8",
"build_date" : "2017-06-22T00:34:03.743Z",
"build_snapshot" : false,
"lucene_version" : "6.5.1"
},
"tagline" : "You Know, for Search"
}</code></pre>
</div>
</div>
If you’re seeing an authentication error instead of the response above, then you’re not including the correct authentication credentials in your request. Double-check that the request includes the credentials shown in your dashboard and try again.
Because the URL includes authentication information, anyone with the fully-qualified URL can access your cluster. Treat the URL like a password: don’t hard-code it into applications, don’t check it into source control, and for Pete’s sake, never ever paste it into a StackOverflow question. If you need to share your cluster URL with us for support purposes, the auth-less host name is all you need:
Bad: “My cluster is https://username:password@somehost-1234567.us-east-1.bonsaisearch.net”
Good: “My cluster is somehost-1234567”
If your URL is leaked somehow, you can regenerate the credentials.
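If you do need to log or share the cluster location, strip the credentials first. A small standard-library helper:

```python
from urllib.parse import urlsplit

def redacted_host(cluster_url):
    """Return just the host name from a Bonsai cluster URL,
    dropping the embedded username:password entirely."""
    return urlsplit(cluster_url).hostname
```

This yields the auth-less host name, which is all the support team needs.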
Adding Bonsai to your app automatically creates a cluster for you. It also adds a variable called <span class="inline-code"><pre><code>BONSAI_URL</code></pre></span> to your application’s environment, which will contain the canonical URL and credentials to your cluster. This automation makes it fast and easy to get started with your new cluster.
In the off chance that Bonsai (and much of the internet with it) experiences the complete loss of an AWS EC2 region, all of your cluster’s data is maintained in AWS’s S3 system, which has a reliability guarantee of 99.99% uptime and 99.999999999% durability.
If such a failure happens, Bonsai’s staff will work with your team to understand where you will be relocating your application and can then initiate a restore process into a cluster in the same AWS Region while maintaining your existing DNS connections. Alternatively, clusters on an Enterprise plan have the option to re-provision to a nearby AWS Region.
An event like this is handled as a Severity 1 incident.
How much time will it take to restore?
Several factors determine the time to restore: the lead time for a support request (or for Bonsai’s internal alerting to reach our Platform team), plus the time to restore primary data from the AWS S3 system. For example, recovering 1 TB of primary data can take just over two hours; performance may vary. With 5 TB of primary data, a restore can take over 10 hours.
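As a back-of-the-envelope formula: restore time is data size divided by sustained throughput. The throughput figure below is an assumption chosen to match the ballpark durations above, not a guaranteed rate:

```python
def restore_hours(primary_data_tb, throughput_mb_per_s=125.0):
    """Estimate restore time as size / sustained throughput.

    The default throughput is an illustrative assumption only.
    """
    megabytes = primary_data_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal)
    return megabytes / throughput_mb_per_s / 3600

print(round(restore_hours(1), 1))  # roughly 2.2 hours for 1 TB
print(round(restore_hours(5), 1))  # roughly 11.1 hours for 5 TB
```

Remember to add the alerting/support lead time on top of the raw transfer estimate.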
All production Bonsai Clusters are deployed to a minimum of three nodes for redundancy and to prevent stalemates in leadership election. Each node in the cluster is deployed to a separate AWS Availability Zone, giving us data center isolation as well.
When a Bonsai cluster does experience a node loss, Elasticsearch and OpenSearch will automatically reroute the primary and replica shards to machines that are up and running. In the background, AWS Auto Scaling Groups will immediately begin spinning up the replacement instance that will auto-bootstrap into your configured Elasticsearch or OpenSearch configuration and version. Once the node has successfully provisioned, it will join the cluster, and then Elasticsearch or OpenSearch will offload the relocated shards back to the empty machine.
An event like this is handled as a Severity 1 incident.
A Bonsai cluster that experiences a complete loss of two AWS data centers does represent downtime for a cluster until the primary shards are restored on the last remaining node. To mitigate this downtime on an Enterprise cluster, we can discuss a setup that includes multi-region deployment.
All production Bonsai Clusters are deployed to a minimum of three nodes for redundancy and to prevent stalemates in leadership election. Each node in the cluster is deployed to a separate AWS Availability Zone, giving us data center isolation as well.
A Bonsai cluster could experience a complete loss of one AWS data center, and the cluster will still continue to operate. This makes Bonsai clusters extremely fault-tolerant.
When a Bonsai cluster does experience a node loss, Elasticsearch and OpenSearch will automatically reroute the primary and replica shards to machines that are up and running. In the background, AWS Auto Scaling Groups will immediately begin spinning up the replacement instance that will auto-bootstrap into your configured Elasticsearch or OpenSearch configuration and version. Once the node has successfully provisioned, it will join the cluster, and then Elasticsearch or OpenSearch will offload the relocated shards back to the empty machine.
An event like this is handled as a Severity 1 incident.
The Bonsai cluster’s availability will not be impacted by these two changes. Ideally, these are scheduled during off-peak hours for your cluster. Our operators are notified if something were to go awry during the process.
There are 2 options for a minor version upgrade.
Option 1 - Enterprise Plans
Bonsai clusters on Enterprise plans can send in a request to support@bonsai.io for our team to perform an in-place minor version upgrade on your behalf. The impact of an in-place minor version upgrade is the same as a rolling restart: there will be a few minutes of read-only mode as it restarts.
Option 2 - Non-Enterprise Plans
Check out our documentation on Upgrading Major Versions that we also recommend for minor version upgrades. By following our variation of a blue-green strategy outlined in the documentation and assuming you can pause or buffer your updates:
There are 2 options for a major version upgrade.
Option 1 - Enterprise Plans
Schedule a time with our team for us to perform a snapshot-restore process to upgrade a major version. During the process, you can expect a short period (depending on your cluster’s disk capacity) of read-only mode. In most cases it takes a few seconds. TitanIAM users will find that it handles retries, so as far as their application is concerned, this is close to zero downtime.
The read-only mode ends with an atomic update to the routing layer, sending incoming traffic to the new hardware. This gives our team a fallback path if something goes awry. However, any writes applied between the routing update and a fallback would be lost.
Option 2 - Non-Enterprise Plans
Check out our documentation on Upgrading Major Versions. By following our recommended variation of a blue-green strategy outlined in the documentation and assuming you can pause or buffer your updates:
No. HTTP pipelining is not supported on Bonsai's platform; Bonsai supports HTTP/2 instead.
The Groovy scripting language was introduced in Elasticsearch in version 1.4 as a replacement for MVEL. This replacement was supposed to address a number of security vulnerabilities in MVEL (among other things). However, Groovy also introduced some serious vulnerabilities that led to some high profile attacks.
While those specific vulnerabilities were ultimately patched, the decision was made that Groovy was not a safe option for multitenant configurations. As a result, Groovy scripting is only enabled for single tenant plans.
If your app returns errors containing <span class="inline-code"><pre><code>ExpressionScriptCompilationException</code></pre></span>, it’s likely that you need Groovy scripting enabled on your cluster. This is a feature available on Dedicated and Enterprise plans. Users with a Business or Enterprise plan should contact us to get Groovy enabled. Users on our multitenant plans can still use dynamic scripting with the faster (but somewhat limited) Lucene Expressions language.
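To give a flavor of the expression language, the following is a hedged sketch of a function_score query scored with an expression script, using the pre-5.x script syntax. The `popularity` field is hypothetical; the payload is only validated locally here rather than sent to a cluster.

```shell
# Hedged sketch: a function_score query using the Lucene Expressions language
# (pre-5.x script syntax; the "popularity" field is hypothetical).
QUERY=$(cat <<'EOF'
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "script_score": {
        "script": "_score * doc['popularity'].value",
        "lang": "expression"
      }
    }
  }
}
EOF
)

# Validate the payload locally before sending it anywhere:
echo "$QUERY" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'

# To run it against your cluster:
#   curl -XPOST "$CLUSTER_URL/your-index/_search" -d "$QUERY"
```

Expressions are limited to numeric fields, which is the main trade-off versus Groovy.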
Yes! CORS is enabled for all clusters by default. Users will not need to perform any additional configuration on their clusters in order to set it up.
Do you need to test a certain analyzer or a new Elasticsearch feature? Testing locally is usually the fastest way to make iterative changes before pushing them to staging or production. Download an Elasticsearch version that is compatible with your plan, then find your operating system below and follow the instructions to get Elasticsearch running locally and connect to it.
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">$ brew install elasticsearch</code></pre>
</div>
</div>
Windows users can download Elasticsearch as a ZIP file. Simply extract the contents of the ZIP file, and run <span class="inline-code"><pre><code>bin/elasticsearch.bat</code></pre></span> to start up an instance. Note that you’ll need Java installed and configured on your system in order for Elasticsearch to run properly.
Elasticsearch can also be run as a service in Windows.
There are many Linux distributions out there, so the exact method of getting Elasticsearch installed will vary. Generally, you can download a tarball of Elasticsearch, and extract the compressed contents to a folder. It should have all of the proper executable permissions set, so you can just run <span class="inline-code"><pre><code>bin/elasticsearch</code></pre></span> to spin up an instance. Note that if you’re managing Elasticsearch in Linux without a package manager, you’ll need to ensure all the dependencies are met. Java 7+ is a hard requirement, and there may be others.
Some distributions have preconfigured Elasticsearch binaries available through repositories. Arch Linux, for example, offers it through the community repo, and can be easily installed via pacman:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">$ sudo pacman -Syu elasticsearch
</code></pre>
</div>
</div>
This package also comes with a systemd service file for starting/stopping Elasticsearch with <span class="inline-code"><pre><code>sudo systemctl start elasticsearch.service</code></pre></span> and <span class="inline-code"><pre><code>sudo systemctl stop elasticsearch.service</code></pre></span>.
One caveat with Arch: packages are bleeding edge, which means updates are pushed out as they become available. Bonsai is not a bleeding edge service, so you’ll need to be careful to version lock the Elasticsearch package to whatever version you’re running on Bonsai. You may also need to edit the PKGBUILD and elasticsearch.install files to ensure you’re running the same version locally and on Bonsai.
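One way to version lock on Arch is to list the package under `IgnorePkg` in `/etc/pacman.conf`, so a blanket `pacman -Syu` skips it. The sketch below demonstrates the edit on a temporary copy rather than the real config:

```shell
# Hedged sketch: version-lock Elasticsearch on Arch via IgnorePkg in
# /etc/pacman.conf. Demonstrated on a temporary copy of the config:
conf=$(mktemp)
printf '[options]\n#IgnorePkg   =\n' > "$conf"

# Enable the IgnorePkg line and add the package:
sed -i 's/^#IgnorePkg.*/IgnorePkg = elasticsearch/' "$conf"
cat "$conf"

# Upgrading later then requires an explicit `pacman -S elasticsearch`.
```

With the package ignored, routine system updates will leave your local Elasticsearch at the version matching your Bonsai cluster.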
Other distros can use the DEB and RPM files that Elasticsearch offers on the download page. Debian-based Linux distributions can use <span class="inline-code"><pre><code>dpkg</code></pre></span> to install Elasticsearch (note that this doesn’t handle configuring dependencies like Java):
<div class="code-snippet-container">
<a fs-copyclip-element="click-4" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript"># Update the package lists
$ sudo apt-get update
# Make sure Java is installed and working:
$ java -version
# If the version of Java shown is not 7+ (1.7+ if using OpenJDK),
# or it doesn't recognize java at all, you need to install it:
$ sudo apt-get install openjdk-7-jre
# Download the DEB from Elasticsearch:
$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-X.Y.Z.deb
# Install the DEB:
$ sudo dpkg -i elasticsearch-X.Y.Z.deb
</code></pre>
</div>
</div>
This approach will install the configuration files to <span class="inline-code"><pre><code>/etc/elasticsearch/</code></pre></span> and will add init scripts to <span class="inline-code"><pre><code>/etc/init.d/elasticsearch</code></pre></span>.
Elasticsearch also provides an RPM file for installation on distros that use rpm:
<div class="code-snippet-container">
<a fs-copyclip-element="click-5" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-5" class="hljs language-javascript"># Download the package
$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-X.Y.Z.rpm
# Install it
$ rpm -Uvh elasticsearch-X.Y.Z.rpm
</code></pre>
</div>
</div>
<span class="inline-code"><pre><code>rpm</code></pre></span> should handle all of the dependency checks as well, so it will tell you if there is something missing.
Once you have Elasticsearch installed and running on your local machine, you can verify that it’s up and running with a tool like curl. By default, Elasticsearch listens on port 9200 of <span class="inline-code"><pre><code>localhost</code></pre></span>. If that hostname doesn’t resolve, you can always use the machine’s loopback IP address (typically 127.0.0.1).
The <span class="inline-code"><pre><code>curl</code></pre></span> request and Elasticsearch response should look something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-6" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-6" class="hljs language-javascript">curl localhost:9200/
{
"name" : "KLJhbnj",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "HLUKJBjMGJHFIKUIpjPJuREg",
"version" : {
"number" : "6.0.1",
"build_hash" : "e123a8",
"build_date" : "2019-01-22T00:34:03.743Z",
"build_snapshot" : false,
"lucene_version" : "6.5.1"
},
"tagline" : "You Know, for Search"
}
</code></pre>
</div>
</div>
If you see this response, then your local Elasticsearch cluster is up and running! If not, review the documentation for getting Elasticsearch up and running on your operating system and try again.
Once Elasticsearch is running locally, you can configure a local instance of your application to connect to <span class="inline-code"><pre><code>localhost:9200</code></pre></span> (this is probably the default for your Elasticsearch client anyway). Now you can test out your application’s Elasticsearch integration locally!
The information provided in this section pertains to Bonsai clusters which have been provisioned on public networks (accessible across the Internet). Bonsai clusters that are provisioned in a Vault or Heroku Private Space cannot be reached in this way.
This is by design -- why put a cluster on a private network if you want to access it from anywhere? Connecting to Bonsai clusters in private networks will require a user to establish a remote connection to the VPC before being able to interact with Elasticsearch.
Every Bonsai cluster is created with a unique URL designed for secure, authenticated access to an Elasticsearch cluster. This URL allows a wide array of platforms and application clients to communicate with Elasticsearch. This URL looks something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsaisearch.net
</code></pre>
</div>
</div>
This URL has the following parts: the protocol (https), the API Key and API Secret used as credentials (a1b2c3d4e and 5f6g7h8i9), the hostname (somehost-1234567), the region slug (region-x-y), and the domain (bonsaisearch.net).
Let's examine these in greater detail.
All Bonsai clusters default to secure HTTPS using recent versions of TLS for encrypted communication. These connections should use port 443, the standard for SSL/TLS.
We also support unencrypted HTTP access over ports 80 and 9200.
You can see a table describing the protocols and ports available here:
<table>
<thead>
<tr>
<th>Port</th><th>Protocol</th><th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>80</td><td>HTTP</td><td>Unencrypted</td>
</tr>
<tr>
<td>443</td><td>HTTPS</td><td>Default; recommended</td>
</tr>
<tr>
<td>9200</td><td>HTTP</td><td>Unencrypted</td>
</tr>
<tr>
<td>9300</td><td>Elasticsearch Native Binary Protocol</td><td>Not supported</td>
</tr>
</tbody>
</table>
Unfortunately we do not support the native binary protocol on 9300 at this time. If your application strongly depends on the binary protocol, please contact our sales team at info@bonsai.io to design a custom cluster deployment.
When your cluster is created, it is provided with a randomly generated set of credentials. In the example URL above, the API Key is <span class="inline-code"><pre><code>a1b2c3d4e</code></pre></span> and the API Secret is <span class="inline-code"><pre><code>5f6g7h8i9</code></pre></span>. These are often supplied to various HTTP or Elasticsearch clients as the HTTP username and password, and encoded according to the Basic Authorization scheme. (Cf. RFC 2617 Section 2.)
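The Basic Authorization encoding mentioned above is straightforward to see for yourself. The sketch below uses the sample credentials from the example URL (not real ones) and shows the header that clients construct automatically when handed a `https://KEY:SECRET@host` URL:

```shell
# Hedged sketch: how an HTTP client turns the API Key and API Secret into a
# Basic Authorization header (per RFC 2617). These are the sample credentials
# from the URL above, not real ones.
API_KEY="a1b2c3d4e"
API_SECRET="5f6g7h8i9"

# Join with a colon and base64-encode:
TOKEN=$(printf '%s:%s' "$API_KEY" "$API_SECRET" | base64)
echo "Authorization: Basic $TOKEN"

# Equivalent explicit curl call:
#   curl -H "Authorization: Basic $TOKEN" https://somehost-1234567.region-x-y.bonsaisearch.net
```

Passing the credentials in the URL and passing them in an explicit header are interchangeable; the header form is handy for clients that don't accept userinfo in URLs.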
The cluster credentials are not the same as what you would use to log into your Bonsai/Heroku account. These are randomly generated when the cluster is created.
If you're seeing an HTTP 401 error, then you're not including the correct authentication credentials in your request. Double-check that the request includes the credentials shown in your dashboard and try again.
Because the URL includes authentication information, anyone with the fully-qualified URL can access your cluster. Treat the URL like a password: don't hard-code it into applications, don't check it into source control, and for Pete's sake, never ever paste it into a StackOverflow question. If you need to share your cluster URL with us for support purposes, the auth-less host name is all you need:
Bad: "My cluster is https://username:password@somehost-1234567.us-east-1.bonsaisearch.net"
Good: "My cluster is somehost-1234567"
If your URL is leaked somehow, you can regenerate the credentials. See the dashboard documentation for more information.
Each Bonsai cluster has a unique hostname. In the example above, this is <span class="inline-code"><pre><code>somehost-1234567</code></pre></span>. The hostname is a blend of your cluster's name along with a unique identifier. The hostname is immutable: once the cluster has been created, the hostname can not be changed. Keep this in mind when creating your cluster.
When asking questions via our support channels, you may provide your cluster's hostname (e.g., <span class="inline-code"><pre><code>somehost-1234567</code></pre></span>) to help us cross-reference with your account and your cluster's logs.
All Bonsai clusters have a region slug included in the URL; <span class="inline-code"><pre><code>us-east-1</code></pre></span> is one example.
These slugs are based on the AWS Regions and AZs and Google Cloud Regions and Zones, and indicate where on the planet your cluster is running. Ideally, this will be as close as possible to your application servers, to minimize latency.
All Bonsai clusters can be accessed via either of two domains: bonsai.io or bonsaisearch.net. In other words, both of these URLs will work:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsaisearch.net
https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsai.io
</code></pre>
</div>
</div>
This is a failover option to accommodate the rare, but not unheard of, TLD outage. If there is an outage impacting .net or .io domains, users can point their application at whichever TLD is operational.
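Since the two URLs differ only in the domain, the fallback can be derived programmatically. A sketch, using the sample URL from above:

```shell
# Hedged sketch: derive the fallback URL by swapping the domain, so an
# application can fail over if one TLD is unreachable.
PRIMARY="https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsaisearch.net"
FALLBACK=$(printf '%s\n' "$PRIMARY" | sed 's/bonsaisearch\.net$/bonsai.io/')
echo "$FALLBACK"

# In an application, you would try the primary and only fail over on error, e.g.:
#   curl --fail --max-time 5 "$PRIMARY" || curl --fail "$FALLBACK"
```

Keeping both URLs in your configuration ahead of time is simpler still; the point is that the credentials, hostname, and region slug are identical across the two domains.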
One way to connect to Elasticsearch is with a command line tool called `curl`. Curl is easy to use and comes preinstalled on many operating systems (and is widely available for download and installation). Curl can send and receive data from your Bonsai Elasticsearch cluster like so:
<div class="code-snippet-container">
<a fs-copyclip-element="click-4" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript">curl https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsaisearch.net
{
"name" : "PvRcoFq",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "DNlbVYS0TIGYwbQ6CUNwTw",
"version" : {
"number" : "5.4.3",
"build_hash" : "eed30a8",
"build_date" : "2017-06-22T00:34:03.743Z",
"build_snapshot" : false,
"lucene_version" : "6.5.1"
},
"tagline" : "You Know, for Search"
}
</code></pre>
</div>
</div>
Curl can be used to create and remove indices, add data, check on cluster state and more:
<div class="code-snippet-container">
<a fs-copyclip-element="click-5" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-5" class="hljs language-javascript"># Create an index called 'test'
curl -s -XPUT https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsaisearch.net/test
{"acknowledged":true,"shards_acknowledged":true}

# Add a document to the 'test' index
curl -s -XPUT https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsaisearch.net/test/doc/1 -d '{"title":"test doc"}'
{"_index":"test","_type":"doc","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"created":true}

# Delete the 'test' index
curl -s -XDELETE https://a1b2c3d4e:5f6g7h8i9@somehost-1234567.region-x-y.bonsaisearch.net/test
{"acknowledged":true}
</code></pre>
</div>
</div>
Another way to connect to Elasticsearch is via your browser of choice. Simply copy the full URL from the Bonsai dashboard, and paste it into the browser's address bar. It should look something like this:
Some browsers, as a security feature, will strip the authentication credentials from the URL to prevent them from being displayed in the address bar. The credentials are stored in a local session. However, if you subsequently copy/paste the URL from the address bar, it will drop the credentials. There are some other conditions that can cause the credentials to be lost from the active window.
If you're using a browser to interact with Elasticsearch and you are seeing HTTP 401 errors, try pasting the full URL (with credentials) back into the address bar.
This method is good for reading the cluster state, but not so great for creating indices, updating data, etc.
The Bonsai dashboard offers an interactive console which allows users to engage with their cluster. This console simplifies the process of communicating with the cluster, and it obviates the need for dealing with credentials altogether.
Note that Bonsai offers Private Spaces; when a space is private, the Interactive Console is not accessible.
Creating an index on Elasticsearch is the first step towards leveraging the awesome power of Elasticsearch. While there is a wealth of resources online for creating an index on Elasticsearch, if you’re new to it, make sure to check out the definition of an index in our Elasticsearch core concepts.
If you’re already familiar with the basics, we have a blog post / white paper on The Ideal Elasticsearch Index, which has a ton of information and things to think about when creating an index.
Note that many Elasticsearch clients will take care of creating an index for you. You should review your client’s documentation for more information on its index usage conventions. If you don’t know how many indexes your application needs, we recommend creating one index per model or database table.
By default, Elasticsearch has a feature that will automatically create indices. Simply pushing data into a non-existing index will cause that index to be created with mappings inferred from the data. In accordance with Elasticsearch best practices for production applications, we’ve disabled this feature on Bonsai.
However, some popular tools such as Kibana and Logstash do not support explicit index creation, and rely on auto-creation being available. To accommodate these tools, we’ve whitelisted popular time-series index names such as <span class="inline-code"><pre><code>logstash*</code></pre></span>, <span class="inline-code"><pre><code>requests*</code></pre></span>, <span class="inline-code"><pre><code>events*</code></pre></span>, <span class="inline-code"><pre><code>.kibana*</code></pre></span> and <span class="inline-code"><pre><code>kibana-int*</code></pre></span>.
For the purposes of this discussion, we’ll assume that you don’t have an Elasticsearch client that can create the index for you. This guide will proceed with the manual steps for creating an index, changing settings, populating it with data, and finally destroying your index.
There are two main ways to manually create an index in your Bonsai cluster. The first is with a command line tool like <span class="inline-code"><pre><code>curl</code></pre></span> or <span class="inline-code"><pre><code>httpie</code></pre></span>. Curl is a standard tool that is bundled with many *nix-like operating systems. OSX and many Linux distributions should have it. It can even be installed on Windows. If you do not have curl, and don’t have a package manager capable of installing it, you can download it here.
The second is through the Interactive Console. The Interactive Console is a feature provided by Bonsai and found in your cluster dashboard.
Let’s create an example index called <span class="inline-code"><pre><code>acme-production</code></pre></span> from the command line with curl.
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">$ curl -X PUT http://user:password@redwood-12345.us-east-1.bonsai.io/acme-production
{"acknowledged":true}</code></pre>
</div>
</div>
All Bonsai clusters have a randomly generated username and password. By default, these credentials need to be included with all requests in order to be processed. If you’re seeing an HTTP 401 error like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">HTTP 401: Authorization required</code></pre>
</div>
</div>
then your credentials were not supplied. You can view the fully-qualified URL in your cluster dashboard. It will look like this: <span class="inline-code"><pre><code>http://user:password@redwood-12345.us-east-1.bonsai.io</code></pre></span>
We can inspect the new index with a GET call to <span class="inline-code"><pre><code>/_cat/indices</code></pre></span> like so:
<div class="code-snippet-container">
<a fs-copyclip-element="click-4" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript">$ curl -XGET http://user:password@redwood-12345.us-east-1.bonsai.io/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open acme-production 1 1 0 0 260b 130b</code></pre>
</div>
</div>
The <span class="inline-code"><pre><code>?v</code></pre></span> at the end of the URL tells Elasticsearch to include column headers in its response. It’s not required, but it helps explain the data.
In the example above, Elasticsearch shows that the <span class="inline-code"><pre><code>acme-production</code></pre></span> index was created with one primary shard and one replica shard. It doesn’t have any documents yet, and is only a few bytes in size.
Let’s add a replica to the index:
<div class="code-snippet-container">
<a fs-copyclip-element="click-5" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-5" class="hljs language-javascript">$ curl -XPUT http://user:password@redwood-12345.us-east-1.bonsai.io/acme-production/_settings -d '{"index":{"number_of_replicas":2}}'
{"acknowledged":true}</code></pre>
</div>
</div>
Now, when we re-query the <span class="inline-code"><pre><code>/_cat/indices</code></pre></span> endpoint, we can see that there are now two replicas, where before there was only one:
<div class="code-snippet-container">
<a fs-copyclip-element="click-6" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-6" class="hljs language-javascript">$ curl -XGET http://user:password@redwood-12345.us-east-1.bonsai.io/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open acme-production 1 2 0 0 390b 130b</code></pre>
</div>
</div>
Similarly, if we wanted to remove all the replicas, we could simply modify the JSON payload like so:
<div class="code-snippet-container">
<a fs-copyclip-element="click-7" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-7" class="hljs language-javascript">$ curl -XPUT http://user:password@redwood-12345.us-east-1.bonsai.io/acme-production/_settings -d '{"index":{"number_of_replicas":0}}'
{"acknowledged":true}
$ curl -XGET http://user:password@redwood-12345.us-east-1.bonsai.io/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open acme-production 1 0 0 0 318b 159b</code></pre>
</div>
</div>
Let’s insert a “Hello, world” test document to verify that your new index is available, and to highlight some basic Elasticsearch concepts.
Every document prior to Elasticsearch 7.x should specify a <span class="inline-code"><pre><code>type</code></pre></span>, and preferably an <span class="inline-code"><pre><code>id</code></pre></span>. You may specify these values with the <span class="inline-code"><pre><code>_id</code></pre></span> and <span class="inline-code"><pre><code>_type</code></pre></span> keys, or Elasticsearch will infer them from the URL. If you don’t explicitly provide an id, Elasticsearch will create a random one for you.
In the following example, we use POST to add a simple document to the index, specifying a <span class="inline-code"><pre><code>_type</code></pre></span> of <span class="inline-code"><pre><code>test</code></pre></span> and an <span class="inline-code"><pre><code>_id</code></pre></span> of 1. You should replace the sample URL in this document with your own index URL to follow along:
<div class="code-snippet-container">
<a fs-copyclip-element="click-8" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-8" class="hljs language-javascript">$ curl -XPOST http://user:password@redwood-12345/acme-production/test/1 -d '{"title":"Hello world"}'
{
"_index" : "acme-production",
"_type" : "test",
"_id" : "1",
"_version" : 1,
"_shards" : {
"total" : 3,
"successful" : 3,
"failed" : 0
},
"created" : true
}</code></pre>
</div>
</div>
Because we haven’t explicitly defined a mapping (schema) for the acme-production index, Elasticsearch will come up with one for us. It will inspect the contents of each field it receives and attempt to infer a data structure for the content. It will then use that for subsequent documents.
Elasticsearch’s ability to generate mappings on the fly is a really nice feature, but it has some drawbacks. One is that the first value Elasticsearch sees for a field determines how it will interpret that field in all subsequent documents.
For example, there have been cases where users attempt to index geospatial data, and Elasticsearch interprets the field as being a float type. Certain documents then fail later in the indexing process with an HTTP 400 error. Or everything succeeds, but geospatial filtering is broken.
It’s a best practice to explicitly create your mappings before indexing into a new index, if you’re planning to power a production application. Today, most clients and frameworks are pretty good about handling this automatically, but it’s a subtle “gotcha” that has made its way into the support queues from time to time.
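A minimal sketch of that best practice, using the pre-7.x type syntax from this walkthrough: the `geo_point` field illustrates the geospatial gotcha described above, and the curl call is shown commented out because it targets the sample cluster URL.

```shell
# Hedged sketch: create the index with an explicit mapping up front
# (pre-7.x syntax, matching the "test" type used in this walkthrough).
MAPPING=$(cat <<'EOF'
{
  "mappings": {
    "test": {
      "properties": {
        "title":    { "type": "string" },
        "location": { "type": "geo_point" }
      }
    }
  }
}
EOF
)

# Sanity-check that the payload is valid JSON before sending it:
echo "$MAPPING" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'

# Then create the index with the mapping already in place:
#   curl -XPUT http://user:password@redwood-12345.us-east-1.bonsai.io/acme-production -d "$MAPPING"
```

With the mapping declared first, the `location` field will always be treated as a geo_point regardless of what the first document happens to contain.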
We can see the mapping that Elasticsearch generated by using the <span class="inline-code"><pre><code>_mapping</code></pre></span> API:
<div class="code-snippet-container">
<a fs-copyclip-element="click-9" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-9" class="hljs language-javascript">$ curl -XGET http://user:password@redwood-12345/acme-production/_mapping
{"acme-production":{"mappings":{"test":{"properties":{"title":{"type":"string"}}}}}}</code></pre>
</div>
</div>
The inspection of the index mapping shows that Elasticsearch has generated a schema from our sample JSON, and that it has decided that documents in the “acme-production” index of type “test” will have a string body in the “title” field. This is reasonable, so we’ll leave it alone.
Next, you may view this document by accessing it directly. In the example below, note the ?pretty parameter at the end of the URL. This tells Elasticsearch to pretty print the results, making them more legible:
<div class="code-snippet-container">
<a fs-copyclip-element="click-10" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-10" class="hljs language-javascript">$ curl -XGET 'http://user:password@redwood-12345/acme-production/test/1?pretty'
{
"_index" : "acme-production",
"_type" : "test",
"_id" : "1",
"_version" : 1,
"found" : true,
"_source" : {
"title" : "Hello world"
}
}</code></pre>
</div>
</div>
Alternatively, you can see it in the search results with the <span class="inline-code"><pre><code>_search</code></pre></span> endpoint:
<div class="code-snippet-container">
<a fs-copyclip-element="click-11" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-11" class="hljs language-javascript">$ curl -XGET 'http://user:password@redwood-12345/acme-production/_search?pretty'
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"hits" : {
"total" : 3,
"max_score" : 1.0,
"hits" : [ {
"_index" : "acme-production",
"_type" : "test",
"_id" : "1",
"_score" : 1.0,
"_source" : {
"title" : "Hello world"
}
} ]
}
}</code></pre>
</div>
</div>
Note the <span class="inline-code"><pre><code>_source</code></pre></span> key, which contains a copy of your original document. Elasticsearch makes an excellent general-purpose document store, although it should never be used as a primary data store. Use something ACID-compliant for that.
The <span class="inline-code"><pre><code>_source</code></pre></span> field also adds some overhead. It can be disabled in the mappings; see the Elasticsearch documentation for more details.
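As a sketch (the cluster URL and index name are placeholders), disabling _source at index-creation time looks something like this; note that with it disabled you can no longer retrieve original documents:

```shell
# Placeholder cluster URL; substitute your own credentials and host.
BONSAI_URL="http://user:password@redwood-12345"

# Mapping that disables the _source field to save storage overhead.
# Caution: with _source disabled you cannot fetch original documents back,
# and features that rely on the stored source (like reindexing) break.
MAPPING_JSON='{
  "mappings": {
    "test": {
      "_source": { "enabled": false },
      "properties": {
        "title": { "type": "string" }
      }
    }
  }
}'

# Apply at index-creation time:
# curl -XPUT "$BONSAI_URL/acme-production" -d "$MAPPING_JSON"
```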
To learn more about the operations supported by your index, you should read the Elasticsearch Index API documentation. Note that some operations mentioned in the documentation (such as “Automatic Index Creation”) are restricted on Bonsai for technical reasons.
When you have decided you no longer need the “acme-production” index, you can destroy it with a one-liner:
<div class="code-snippet-container">
<a fs-copyclip-element="click-12" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-12" class="hljs language-javascript">$ curl -XDELETE http://user:password@redwood-12345/acme-production
{"acknowledged":true}</code></pre>
</div>
</div>
The <span class="inline-code"><pre><code>DELETE</code></pre></span> verb will delete one or more indices in your cluster. If you have several indices to delete, you can still perform the action in one line by concatenating the indices with a comma, like so:
<div class="code-snippet-container">
<a fs-copyclip-element="click-13" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-13" class="hljs language-javascript">$ curl -XDELETE http://user:password@redwood-12345/acme-production-1,acme-production-2,acme-production-3</code></pre>
</div>
</div>
Destroying an index cannot be undone, unless you restore it from a snapshot (if one exists). Do not delete indices without fully understanding the consequences. If there is a chance that your cluster is supporting a production application, be very careful before taking this kind of action. Accidental deletes are a major reason Bonsai doesn’t support <span class="inline-code"><pre><code>_all</code></pre></span> or wildcard (*) destructive actions.
This document serves to highlight key breaking changes for our customers. It does not represent the full list of breaking changes. As always, we recommend you test your searches on the new version of Elasticsearch before moving forward.
There are no specific tricks for this upgrade process, outside of the General Solution listed above. We would simply recommend you follow our traditional process for testing your searches on a new version of Elasticsearch.
This document serves to highlight key breaking changes for our customers. It does not represent the full list of breaking changes. As always, we recommend you test your searches on the new version of Elasticsearch before moving forward.
Elasticsearch 5 removes the <span class="inline-code"><pre><code>string</code></pre></span> type, replacing it with <span class="inline-code"><pre><code>text</code></pre></span> and <span class="inline-code"><pre><code>keyword</code></pre></span>.
Analyze your mappings and look for <span class="inline-code"><pre><code>string</code></pre></span> mapping types. You will want to review how you use those fields to determine if you should use <span class="inline-code"><pre><code>text</code></pre></span> or <span class="inline-code"><pre><code>keyword</code></pre></span> going forward.
<span class="inline-code"><pre><code>text</code></pre></span>: use when you want full-text search analysis on the field
<span class="inline-code"><pre><code>keyword</code></pre></span>: use when you want to search for the value exactly as it is (a PO number or URL, for example)
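A minimal sketch of an ES 5 mapping using both types (the index, type, and field names here are hypothetical):

```shell
# Hypothetical field names illustrating the text vs. keyword split in ES 5:
# "description" gets full-text analysis; "po_number" is matched exactly.
MAPPING_JSON='{
  "mappings": {
    "doc": {
      "properties": {
        "description": { "type": "text" },
        "po_number":   { "type": "keyword" }
      }
    }
  }
}'

# Apply when creating the index, e.g.:
# curl -XPUT "http://user:password@redwood-12345/my-index" -d "$MAPPING_JSON"
```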
There are no specific tricks for this upgrade process, outside of the General Solution listed above. We would simply recommend you follow our traditional process for testing your searches on a new version of Elasticsearch.
This document serves to highlight key breaking changes for our customers. It does not represent the full list of breaking changes. As always, we recommend you test your searches on the new version of Elasticsearch before moving forward.
Elasticsearch 6 has a stand-out breaking change: the limitation of one mapping type per index, a step toward the upcoming removal of mapping types. Previously, Elasticsearch supported multiple mapping types, which let you store different "types" of data in one index. Elastic is removing this feature in ES 7 after announcing its deprecation in ES 6.
This has been the standard recommendation for the best performance in Elasticsearch since its inception. Rather than index multiple types of documents into a single index, simply use one index per document type. Many tools and frameworks supporting Elasticsearch will do this for you automatically.
For those times when it simply doesn't make sense to have one document type per index, you can add a dedicated field to each document that declares its type. Then, when you query that index, filter on that field to limit results to the type you want.
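A sketch of that pattern, with hypothetical index and field names: each document carries its own doc_type field, and queries filter on it with a term clause:

```shell
# Hypothetical document carrying its own "doc_type" field in place of a
# mapping type.
DOC='{ "doc_type": "comment", "body": "Great post!" }'

# Hypothetical query scoping results to one logical type via a term filter.
QUERY='{
  "query": {
    "bool": {
      "filter": [
        { "term": { "doc_type": "comment" } }
      ]
    }
  }
}'

# Index and search as usual, e.g.:
# curl -XPOST "http://user:password@redwood-12345/acme-production/_doc" -d "$DOC"
# curl -XGET  "http://user:password@redwood-12345/acme-production/_search" -d "$QUERY"
```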
After working through the General Solutions above, we recommend you do the following:
There are no specific tricks for this upgrade process, outside of the General Solution listed above. We would simply recommend you follow our traditional process for testing your searches on a new version of Elasticsearch. In short, this involves provisioning a new 6.x cluster and validating it in a development environment. Once it has been validated, you can point your production application at the new cluster.
There are no specific tricks for this upgrade process, outside of the General Solution listed above. We would simply recommend you follow our traditional process for testing your searches on a new version of Elasticsearch.
This document serves to highlight key breaking changes for our customers. It does not represent the full list of breaking changes. As always, we recommend you test your searches on the new version of Elasticsearch before moving forward.
Elasticsearch 7 has a stand-out breaking change: the removal of mapping types. Previously, Elasticsearch supported mapping types, which let you store different "types" of data in one index. Elastic is removing this feature after announcing its deprecation in ES 6.
This has been the standard recommendation for the best performance in Elasticsearch since its inception. Rather than index multiple types of documents into a single index, simply use one index per document type. Many tools and frameworks supporting Elasticsearch will do this for you automatically.
For those times when it simply doesn't make sense to have one document type per index, you can add a dedicated field to each document that declares its type. Then, when you query that index, filter on that field to limit results to the type you want.
After working through the General Solutions above, we recommend you do the following:
There are no specific tricks for this upgrade process, outside of the General Solution listed above. We would simply recommend you follow our traditional process for testing your searches on a new version of Elasticsearch.
There are no specific tricks for this upgrade process, outside of the General Solution listed above. We would simply recommend you follow our traditional process (bonsai.io/docs/upgrading-major-versions) for testing your searches on a new version of Elasticsearch.
Bonsai has been designed from day one to be resilient against service disruptions resulting from any number of crises. This document lays out some important information about how we maintain our customers’ uptime during large-scale emergencies.
Do you have a Business Continuity Plan implemented?
Yes. In the face of a global event we have designed and built a platform which is highly available and resilient to outages. We also have a distributed, cross-trained and multidisciplinary team with a succession plan. Bonsai is designed to gracefully handle a loss of infrastructure, personnel, or both.
How does your platform prevent service interruption due to catastrophic events?
First and foremost, the vast majority of our platform is automated. We also use multiple systems to monitor our fleet in real time, and have a rotating team of engineers providing 24/7/365 coverage in the event of problems big and small.
Additionally, all Bonsai customers’ search resources are balanced across at least three data centers. In order for a cluster to suffer complete disruption, there would need to be simultaneous outages across an entire region. Even in that event, we take regular, encrypted offsite backups which can be used to restore a cluster to a new region if an entire region were to go offline.
How does your team handle catastrophic events, such as a natural disaster or global pandemic?
One More Cloud has always been a remote-first company, and our team is distributed around the USA. We have the infrastructure and culture in place to facilitate productive work from anywhere on the planet. Further, we have ensured that our team is cross-trained in our operational tooling, so that the platform will continue to be managed even if multiple team members are unreachable due to some calamitous event.
Finally, we are constantly assessing economic changes and potential threats, and revising our company policies accordingly. This includes, but is not limited to: cancelling all non-essential travel, enforcing social distancing, staggering travel plans/accommodations, requiring updated training, and so on.
If you have any questions about any of this, please don’t hesitate to reach out to support@bonsai.io.
Note: The situation is still evolving, and we don’t have exact answers to every question our users have about how Elastic’s switch to the SSPL will affect Bonsai. This FAQ is a living document, and will be updated as new information and decisions are available for sharing.
On Thursday, January 14th, Elastic announced that Elasticsearch 7.11 and on will be dual-licensed under the Elastic proprietary license, and the Server Side Public License (SSPL). Formerly, Elasticsearch had been released under the Elastic and Apache 2.0 licenses. The switch to SSPL is meaningful for several reasons, and there is a lot of uncertainty about what it means for us and our customers. You can read Elastic's FAQ on the subject here.
Last updated: 20-Jan-2021
Bonsai customers have nothing to worry about. Bonsai has multiple ways of continuing to be an Elasticsearch host under the SSPL, and we're exploring which option makes the most sense for everyone involved. The switch to SSPL is noteworthy, and it's worth reading Elastic's posts outlining their reasoning and answering questions. But the short version is that Bonsai customers are not facing any impact.
TL;DR: The SSPL is a software license that allows the software to be offered as a managed service by a vendor, but the vendor must publish their service's complete source code under the SSPL. Simply put, a company like Bonsai that offers Elasticsearch-as-a-service must release all of the code we use to manage Elasticsearch clusters if we want to remain compliant with the terms of the license.
Nothing. The SSPL does not prohibit Bonsai from continuing operations, nor does it prohibit us from supporting new versions of Elasticsearch as they become available. The responsibility is on our team to work out how to remain in compliance with the terms of the new license, and we are committed to doing so.
No. Bonsai has been operational for over a decade, we’re profitable, and we have always had a good relationship with Elastic. There are no plans to shut down, change to some Elasticsearch competitor, raise prices, version-pin, or anything else.
No. We are working out the details for our service’s continued growth and development alongside Elasticsearch.
Elastic has published a blog entry outlining why they're making this change. In short, it has to do with trademark disputes (among other things) with Amazon. Bonsai is not involved in those disputes, and we have always been conscientious about respecting Elastic's IP and trademarks in our branding and marketing.
Feel free to reach out to us if you have a question or concern not answered here. We will answer as best as we can, but please remember that this transition is expected to take time and be complex. Quick, accurate answers may not be immediately available.
In the Supreme Court case South Dakota v. Wayfair, it was determined that U.S. states may charge tax on online purchases made from out-of-state sellers, even if that seller has no physical presence in the state. This has significantly impacted eCommerce companies and SaaS providers like Bonsai. If you sell online to various U.S. states, this impacts your company too.
Because of this ruling, Bonsai will now be charging sales tax to customers in a variety of states. Here’s what you should know:
As of the date when this article was published, 36 states and the District of Columbia have chosen to charge online sales tax. We anticipate that more states will soon begin to charge these taxes. To see up-to-date information on whether or not your state charges sales tax, please reference the Sales Tax Institute’s Nexus State Guide.
Our Hobby Tier will remain free. Standard, Business, and Enterprise plans will all be subject to sales tax.
Plans sold through Heroku, or other third party channels will be taxed through those providers. Please follow up with your reseller for more information.
These taxes will be collected starting in January 2020.
There is no indication that this will be necessary.
Follow our step by step guide for Account Settings Billing.
Log in and contact our Support Team.
Customers may downgrade a cluster's plan or destroy it at any time to receive a prorated service credit. This credit will be automatically applied to the next billing cycle.
For example, suppose a customer creates a cluster on a $50/month plan, then downgrades to a $20/month plan after 14 days. The customer would receive a $25 service credit for the unused time at the $50/mo rate, less $10 for the remainder of the billing period at the $20/mo rate. This leaves the customer with a credit of $15.
At the end of the first billing period, they would be charged $5 for the next month, as the $15 credit would be deducted from their normal rate of $20/mo. In the following month, the customer would pay the normal rate of $20.
Credits are not issued for service and performance related reasons on our free, Staging and Standard subscriptions. However, there are special SLAs available to Business and Enterprise customers, wherein service disruptions may result in a service credit according to a predetermined schedule negotiated directly with our sales and legal teams prior to using Bonsai.
Per our Terms of Service, One More Cloud, Inc. is generally not required to refund a subscription fee under any circumstances. One More Cloud is also not liable for damages incurred through use of the service.
Customers with extenuating circumstances should reach out to support@bonsai.io and describe their situation for consideration.
If you would like to request a SOC 2 Type 2 Report for SOC 2 compliance, please email us at support@bonsai.io.
Once we receive your request, a PDF of the report will be emailed.
Any Bonsai employee who suspects that a data breach has occurred must immediately notify the CTO and CEO. The CTO will form a response team to handle the incident.
The Bonsai response team will investigate all reported data breaches to confirm if a data exfiltration has occurred. Once confirmed, the response team will perform the following steps:
To report a data breach or suspicious activity, email us at support@bonsai.io with a description of events and observations.
The General Data Protection Regulation (GDPR) is a legal framework which went into effect on May 25, 2018. It is designed to give EU citizens more control over their personal data. The GDPR regulates how internet companies track and store customer information. Bonsai is compliant with the GDPR for all customers, regardless of whether or not they are citizens of the EU.
Bonsai has never sold email addresses or private information, does not track users' activity across the web, and does not otherwise spy on users. However, in order to be fully compliant and extend the privacy protections of GDPR to customers around the globe, Bonsai took the following measures prior to the GDPR's effective date:
The GDPR requires internet companies to outline their privacy policies for their customers in plain English. Bonsai's privacy policy can be read in full here: https://bonsai.io/privacy. A few highlights that pertain to the GDPR are:
In preparation for the GDPR, Bonsai published a blog entry about what the company is doing to meet these regulations. Bonsai's current privacy policy can be read here. As always, we’re here to address any questions you may have, so please email us at support@bonsai.io if you need any clarification.
If you would like to request a Data Protection Agreement (DPA) for GDPR compliance, please email us at support@bonsai.io with the full name and email address of the person who will be counter-signing the document.
Once we receive your request, a DPA will be emailed via HelloSign for review and signature.
If you would like to request a Business Associate's Agreement (BAA) for HIPAA compliance, please email us at support@bonsai.io.
Once we receive your request, our team will follow up to review your Bonsai usage and verify appropriate credentials. Upon confirming the necessary details, a BAA will be emailed via HelloSign for review and signature.
An HTTP 404: Not Found error is returned by the API when a requested resource cannot be found. This can happen for one of several reasons:
An HTTP 404: Not Found error may look something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{ "errors": [ "Cluster doesnotexist-1234 not found.", "Please review the documentation available at https://docs.bonsai.io", "Undefined request." ], "status": 404}</code></pre>
</div>
</div>
The <span class="inline-code"><pre><code>"status": 404</code></pre></span> key confirms the error is an HTTP 404: Not Found error.
The first thing to look at when troubleshooting an HTTP 404: Not Found error is the error message array which is returned from the API. Also make sure to check for any typos that may have ended up in the request.
If you're still unsure why you are receiving the error, then please shoot us an email.
An HTTP 403: Forbidden error can occur for one of several reasons. Generally, it communicates that the server understood the request, but is refusing to authorize it. This is distinct from an authentication error (HTTP 401), in that the authentication credentials are correct, but there is some other reason the request is not authorized.
Some examples of situations where a user might see an HTTP 403: Forbidden response from the API:
A call to the API which results in an HTTP 403: Forbidden response may look something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{ "errors": [ "Your API access has been suspended due to a Terms of Service violation. Please contact support@bonsai.io." ], "status": 403}</code></pre>
</div>
</div>
The <span class="inline-code"><pre><code>"status": 403</code></pre></span> key indicates that the error is indeed an HTTP 403: Forbidden error.
The first step for troubleshooting the error is to examine the error messages in detail. If there is a problem that merits contacting support, then you will want to reach out to us for further discussion. Also check your email inbox and spam folders for anything that we may have already sent.
If the error indicates a temporary interruption, such as maintenance mode, then check out our Twitter account for updates, or shoot us an email.
An HTTP 402: Payment Required error occurs when your account is past due and you try to make a request to the API. Bonsai only provides API access to accounts which are up to date on payments.
If you are receiving an HTTP 402: Payment Required error, then there is a balance due on your account. You can update your billing information in your account profile. If you run into any issues, there is documentation available here.
A call to the API which results in an HTTP 402: Payment Required error may look something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet"><pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"errors": [
"Your account has been suspended due to non-payment. Please update your billing information or contact support@bonsai.io."
],
"status": 402
}</code></pre></div></div>
The <span class="inline-code"><pre><code>"status": 402</code></pre></span> key indicates the HTTP 402: Payment Required error.
The first thing to do is navigate to your billing profile and make sure your account is up to date. You can update your credit card if needed, and review any recent invoices. Additionally, you should check your inbox and spam folders for any billing-related notices from Bonsai.
If everything seems correct with your account, you can always contact support and we will be glad to assist.
An HTTP 422: Unprocessable Entity error occurs when a request to the API cannot be processed. This is a client-side error, meaning the problem is with the request itself, and not the API.
If you are receiving an HTTP 422: Unprocessable Entity error, there are several possibilities for why it might be occurring:
A call to the API that results in an HTTP 422: Unprocessable Entity error may look something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a><div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"errors": [
"The Content-Type header specifies application/xml.",
"The Accept header specifies a response format other than application/json.",
"Your request could not be processed. "
],
"status": 422
}</code></pre>
</div>
</div>
The <span class="inline-code"><pre><code>"status": 422</code></pre></span> key indicates the HTTP 422: Unprocessable Entity error.
The first step in troubleshooting this error is to carefully inspect the response from the API. It will often provide valuable information about what went wrong with the request. For example, if there was a problem creating a cluster because a plan slug was not recognized, you might see something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a><div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">{
"errors": [
"Plan 'sandboxing' not found.",
"Please use the /plans endpoint for a list of available plans.",
"Your request could not be processed. "
],
"status": 422
}</code></pre>
</div>
</div>
If all else fails, you can always contact support and we will be glad to assist.
An HTTP 429: Too Many Requests error occurs when an API token is used to make too many requests to the API in a given period. Bonsai throttles the number of API calls that can be made by any given token in order to maintain a high level of service and prevent DoS scenarios.
If you are receiving an HTTP 429: Too Many Requests error, then you are hitting the API too frequently. The API documentation introduction describes which limits are in place.
A call to the API that results in an HTTP 429: Too Many Requests error may look something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"errors": [
"You are making too many requests to the API.",
"Please read the documentation at https://docs.bonsai.io/article/308-api-error-429-too-many-requests"
],
"status": 429
}</code></pre>
</div>
</div>
The first step is to look at how often you are polling the API. If you're checking the API while waiting for a cluster to provision or update, then every 3-5 seconds should be adequate.
If you have an adequate amount of sleep time between API calls, then the next thing to check is whether you have multiple jobs hitting the API with the same token. If you're using some kind of CI system to spin up and tear down clusters during testing and continuous integration, and those jobs all share the same token, then they're likely interfering with each other.
Note that when the API returns an HTTP 429: Too Many Requests error, it will include a header called <span class="inline-code"><pre><code>Retry-After</code></pre></span>, which indicates how many seconds to wait before making your next request. You can add a check for this header to your scripts, so that an HTTP 429 doesn't cause things to fail, but instead informs the sleep duration needed to proceed.
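As a sketch, a small shell helper can pull the Retry-After value out of saved response headers and feed it to sleep. The headers-file approach and the 5-second fallback are assumptions for illustration, not part of the Bonsai API itself:

```shell
# Extract the Retry-After value (in seconds) from a file of saved HTTP
# response headers; fall back to 5 seconds if the header is absent.
retry_after_seconds() {
  headers_file="$1"
  secs=$(grep -i '^Retry-After:' "$headers_file" | tr -d '\r' | awk '{print $2}')
  echo "${secs:-5}"
}

# Typical use: save headers with curl -D, then sleep before retrying, e.g.:
# curl -s -D headers.txt "$API_URL" -H "Authorization: ..."
# sleep "$(retry_after_seconds headers.txt)"
```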
If all else fails, you can always contact support and we will be glad to assist.
An HTTP 401: Unauthorized error occurs when a request to the API could not be authenticated. All requests to API resources must use some authentication scheme to prove access rights to the resource.
If you are receiving an HTTP 401: Unauthorized error, there are several possibilities for why it might be occurring:
Check that the authentication credentials you are passing along in the request are correct and belong to an active token.
A call to the API that results in an HTTP 401: Unauthorized error may look something like this:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"errors": [
"Authentication failed.",
"Could not authenticate your request.",
"This request has failed authentication. Please read the docs or email us at support@bonsai.io."
],
"status": 401
}</code></pre>
</div>
</div>
The <span class="inline-code"><pre><code>"status": 401</code></pre></span> key indicates the HTTP 401: Unauthorized error.
The first thing to do is to carefully read the list of errors returned by the API. This will often include some hints about what is happening:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">{
"errors": [
"The 'Authorization' header has no value for the password field.",
"The API token is missing, inactive or does not exist.",
"Authentication failed.",
"Could not authenticate your request.",
"This request has failed authentication. Please read the docs or email us at support@bonsai.io."
],
"status": 401
}</code></pre>
</div>
</div>
If that doesn't help, then check that the credentials you're sending are correct. You can view the credentials in your account dashboard and cross-reference them with the credentials you're passing to the API.
If you're sure that the credentials are correct, then you may want to try isolating the problem. Try making a curl call to the API and see what happens. For example, using Basic Auth:
<div class="code-snippet-container">
<a fs-copyclip-element="click-4" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript">curl -s -vvv -XGET https://user1234:somereallylongpassword@api.bonsai.io/clusters/
{
"clusters": [],
"status": 200
}</code></pre>
</div>
</div>
If the request succeeds, then you have eliminated the API Token as the source of the problem, and it's likely an issue with how the application is making the call to the API.
If the request still fails, then you should consult the documentation for the authentication scheme you're using to determine which HTTP headers are needed in the request. You can also use the -vvv flag in curl to see which headers and values are being passed with the request.
If all else fails, you can always contact support and we will be glad to assist.
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
The Bonsai API supports HTTP Basic Authentication over TLS >= 1.2 as one means for authenticating requests. The authentication protocol utilizes an Authorization header with the contents: Basic, followed by a Base64 encoding of the token key and token secret joined by a colon.
Many tools, such as curl, will construct this header automatically from credentials in a URL. For example, curl will translate https://user:pass@api.bonsai.io/ into https://api.bonsai.io/ with the header Authorization: Basic dXNlcjpwYXNz.
For Basic Auth, the token key corresponds to the "user" parameter and the token secret corresponds to the "password" parameter.
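The header construction above can be sketched in a few lines. The "user"/"pass" values here are the placeholder credentials from the example, not real ones:

```python
import base64

def basic_auth_header(token_key, token_secret):
    """Build the Authorization header value curl derives from a user:pass URL."""
    encoded = base64.b64encode(f"{token_key}:{token_secret}".encode()).decode()
    return f"Basic {encoded}"

# basic_auth_header("user", "pass") reproduces the example above:
# "Basic dXNlcjpwYXNz"
```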
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
The Bonsai API supports two methods of authenticating requests: HTTP Basic Auth, and HMAC. The former is a widely-adopted standard supported by most HTTP clients, but requires an encrypted connection for safe transmission. The latter is an older and slightly more complicated method, but offers some security even over unencrypted connections.
If you’re connecting to the API via https (as most people are), then Basic Auth is fine. The header containing the credentials is encrypted using industry-standard protocols before being sent over the Internet. TLS allows you to authenticate the receiving party (the API) using a trusted certificate authority, rendering MITM attacks highly unlikely. Basic Auth is not secure over unencrypted connections, however. Your credentials could be leaked and read by a third party.
If you can’t use https for some reason, then consider using HMAC. This protocol involves passing some special headers with your API requests, and is designed with the expectation that a third party can observe the transmission. It’s slightly more complicated to configure, but it uses a time-based nonce signed with a private key, mitigating MITM and replay attacks. A third party could see the data you send and receive with the API, but would not be able to steal your API credentials and interact with the API on your behalf.
Requests that do not have the proper authentication will receive an HTTP 401: Unauthorized response. This can happen for a variety of reasons, including (but not limited to):
If you are having trouble authenticating your requests to the API, please reach out to support@bonsai.io.
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
The Bonsai API supports a hash-based message authentication code protocol for authenticating requests. This scheme allows the API to simultaneously verify both the integrity and the authenticity of a user’s request.
This authentication protocol requires that all API requests include three HTTP headers:
For example, in Ruby, the X-BonsaiApi-Auth header can be computed as: OpenSSL::HMAC.hexdigest('sha1', token_secret, "#{time}#{token_key}"), where the token_secret is the API key’s secret.
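The same digest can be computed in Python with the standard library. This mirrors the Ruby example above; the token key and secret come from your API token, and the signed message is the Unix timestamp concatenated with the token key:

```python
import hashlib
import hmac
import time

def bonsai_hmac_digest(token_secret, token_key, timestamp=None):
    """Python equivalent of OpenSSL::HMAC.hexdigest('sha1', token_secret,
    "#{time}#{token_key}") from the Ruby example above."""
    timestamp = timestamp or str(int(time.time()))
    message = f"{timestamp}{token_key}".encode()
    return hmac.new(token_secret.encode(), message, hashlib.sha1).hexdigest()
```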
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
The Releases API provides users a method to explore the different versions of Elasticsearch available to their account. This API supports the following actions:
All calls to the Releases API must be authenticated with an active API token.
The Bonsai API provides a standard format for Release objects. A Release object includes:
<table>
<thead>
<tr>
<th>Attribute</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>name</td><td>A String representing the name for the release.</td>
</tr>
<tr>
<td>slug</td><td>A String representing the machine-readable name for the release.</td>
</tr>
<tr>
<td>version</td><td>A String representing the version of the release.</td>
</tr>
<tr>
<td>service_type</td><td>A String representing the type of search service (e.g., elasticsearch), as shown in the responses below.</td>
</tr>
<tr>
<td>multitenant</td><td>A Boolean representing whether or not the release is available on multitenant deployments.</td>
</tr>
</tbody>
</table>
The Bonsai API provides a method to get a list of all releases available to your account. An HTTP GET call is made to the <span class="inline-code"><pre><code>/releases</code></pre></span> endpoint, and Bonsai will return a JSON list of Release objects.
No parameters are supported for this action.
An HTTP GET call is made to <span class="inline-code"><pre><code>/releases</code></pre></span>.
Upon success, Bonsai responds with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON list representing the releases available to your account:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"releases": [
{
"name": "Elasticsearch 5.6.16",
"slug": "elasticsearch-5.6.16",
"service_type": "elasticsearch",
"version": "5.6.16",
"multitenant": true
},
{
"name": "Elasticsearch 6.5.4",
"slug": "elasticsearch-6.5.4",
"service_type": "elasticsearch",
"version": "6.5.4",
"multitenant": true
},
{
"name": "Elasticsearch 7.2.0",
"slug": "elasticsearch-7.2.0",
"service_type": "elasticsearch",
"version": "7.2.0",
"multitenant": true
}
]
}</code></pre>
</div>
</div>
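The call above can be sketched with the standard library. The credentials here are placeholders; pass the built request to urllib.request.urlopen to execute it and receive the JSON body shown above:

```python
import base64
import urllib.request

def releases_request(token_key, token_secret):
    """Build an authenticated GET request to the /releases endpoint."""
    req = urllib.request.Request("https://api.bonsai.io/releases")
    cred = base64.b64encode(f"{token_key}:{token_secret}".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    return req  # urllib.request.urlopen(req) performs the call
```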
The Bonsai API provides a method to get information about a single release available to your account.
No parameters are supported for this action.
An HTTP GET call is made to <span class="inline-code"><pre><code>/releases/[:slug]</code></pre></span>.
Upon success, Bonsai responds with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON body representing the Release object:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript">{
"name": "Elasticsearch 7.2.0",
"slug": "elasticsearch-7.2.0",
"service_type": "elasticsearch",
"version": "7.2.0",
"multitenant": true
}</code></pre>
</div>
</div>
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
The Spaces API provides users a method to explore the server groups and geographic regions available to their account, where clusters may be provisioned. This API supports the following actions:
All calls to the Spaces API must be authenticated with an active API token.
The Bonsai API provides a standard format for Space objects. A Space object includes:
<table>
<thead>
<tr>
<th>Attribute</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>path</td><td>A String representing a machine-readable name for the server group.</td>
</tr>
<tr>
<td>private_network</td><td>A Boolean indicating whether the space is isolated and inaccessible from the public Internet. A VPC connection will be needed to communicate with a private cluster.</td>
</tr>
<tr>
<td>cloud</td><td>An Object containing details about the cloud provider and region attributes:
</td>
</tr>
</tbody>
</table>
The Bonsai API provides a method to get a list of all available spaces on your account. An HTTP GET call is made to the <span class="inline-code"><pre><code>/spaces</code></pre></span> endpoint, and Bonsai will return a JSON list of Space objects.
No parameters are supported for this action.
An HTTP GET call is made to <span class="inline-code"><pre><code>/spaces</code></pre></span>.
Upon success, Bonsai responds with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON list representing the spaces available to your account:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"spaces": [
{
"path": "omc/bonsai/us-east-1/common",
"private_network": false,
"cloud": {
"provider": "aws",
"region": "aws-us-east-1"
}
},
{
"path": "omc/bonsai/eu-west-1/common",
"private_network": false,
"cloud": {
"provider": "aws",
"region": "aws-eu-west-1"
}
},
{
"path": "omc/bonsai/ap-southeast-2/common",
"private_network": false,
"cloud": {
"provider": "aws",
"region": "aws-ap-southeast-2"
}
}
]
}</code></pre>
</div>
</div>
The Bonsai API provides a method to get information about a single space available to your account.
No parameters are supported for this action.
An HTTP GET call is made to <span class="inline-code"><pre><code>/spaces/[:path]</code></pre></span>.
Upon success, Bonsai responds with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON body representing the Space object:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">{
"path": "omc/bonsai/us-east-1/common",
"private_network": false,
"cloud": {
"provider": "aws",
"region": "aws-us-east-1"
}
}</code></pre>
</div>
</div>
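Note that the space path is appended to the endpoint as-is, slashes included. A hedged sketch, using the example path from the response above and placeholder credentials:

```python
import base64
import urllib.request

def space_request(path, token_key, token_secret):
    """Build an authenticated GET request for a single space by path."""
    req = urllib.request.Request(f"https://api.bonsai.io/spaces/{path}")
    cred = base64.b64encode(f"{token_key}:{token_secret}".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    return req
```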
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
The Plans API gives users the ability to explore the different cluster subscription plans available to their account. This API supports the following actions:
All calls to the Plans API must be authenticated with an active API token.
The Bonsai API provides a standard format for Plan objects. A Plan object includes:
<table>
<thead>
<tr>
<th>Attribute</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>slug</td><td>A String representing a machine-readable name for the plan. </td>
</tr>
<tr>
<td>name</td><td>A String representing the human-readable name of the plan.</td>
</tr>
<tr>
<td>price_in_cents</td><td>An Integer representing the plan price in cents.</td>
</tr>
<tr>
<td>billing_interval_in_months</td><td>An Integer representing the plan billing interval in months.</td>
</tr>
<tr>
<td>single_tenant</td><td>A Boolean indicating whether the plan is single-tenant or not. A value of false indicates the Cluster will share hardware with other Clusters. Single tenant environments can be reached via the public Internet. Additional documentation here.</td>
</tr>
<tr>
<td>private_network</td><td>A Boolean indicating whether the plan is on a publicly addressable network. Private plans provide environments that cannot be reached by the public Internet. A VPC connection will be needed to communicate with a private cluster.</td>
</tr>
<tr>
<td>available_releases</td><td>An Array with a collection of search release slugs available for the plan. Additional information about a release can be retrieved from the Releases API.</td>
</tr>
<tr>
<td>available_spaces</td><td>An Array with a collection of Space paths available for the plan. Additional information about a space can be retrieved from the Spaces API.</td>
</tr>
</tbody>
</table>
The Bonsai API provides a method to get a list of all plans available to your account. An HTTP GET call is made to the <span class="inline-code"><pre><code>/plans</code></pre></span> endpoint, and Bonsai will return a JSON list of Plan objects.
No parameters are supported for this action.
An HTTP GET call is made to <span class="inline-code"><pre><code>/plans</code></pre></span>.
Upon success, Bonsai responds with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON list representing the Plans available to your account:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"plans": [
{
"slug": "sandbox-aws-us-east-1",
"name": "Sandbox",
"price_in_cents": 0,
"billing_interval_in_months": 1,
"single_tenant": false,
"private_network": false,
"available_releases": [
"7.2.0"
],
"available_spaces": [
"omc/bonsai-gcp/us-east4/common",
"omc/bonsai/ap-northeast-1/common",
"omc/bonsai/ap-southeast-2/common",
"omc/bonsai/eu-central-1/common",
"omc/bonsai/eu-west-1/common",
"omc/bonsai/us-east-1/common",
"omc/bonsai/us-west-2/common"
]
},
{
"slug": "standard-sm",
"name": "Standard Small",
"price_in_cents": 5000,
"billing_interval_in_months": 1,
"single_tenant": false,
"private_network": false,
"available_releases": [
"elasticsearch-5.6.16",
"elasticsearch-6.8.3",
"elasticsearch-7.2.0"
],
"available_spaces": [
"omc/bonsai/ap-northeast-1/common",
"omc/bonsai/ap-southeast-2/common",
"omc/bonsai/eu-central-1/common",
"omc/bonsai/eu-west-1/common",
"omc/bonsai/us-east-1/common",
"omc/bonsai/us-west-2/common"
]
}
]
}</code></pre>
</div>
</div>
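Once you have the response above, a common task is finding which plans offer a given release. A small sketch, using a trimmed-down version of that JSON body:

```python
import json

# Trimmed-down copy of the /plans response shown above.
body = """
{
  "plans": [
    {"slug": "sandbox-aws-us-east-1", "available_releases": ["7.2.0"]},
    {"slug": "standard-sm", "available_releases": ["elasticsearch-5.6.16", "elasticsearch-7.2.0"]}
  ]
}
"""

def plans_with_release(payload, release_slug):
    """Return slugs of plans whose available_releases include release_slug."""
    return [p["slug"] for p in json.loads(payload)["plans"]
            if release_slug in p["available_releases"]]
```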
The Bonsai API provides a method to retrieve information about a single Plan available to your account.
No parameters are supported for this action.
An HTTP GET call is made to <span class="inline-code"><pre><code>/plans/[:plan-slug]</code></pre></span>.
Upon success, Bonsai will respond with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON body representing the Plan object:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">{
"slug": "sandbox-aws-us-east-1",
"name": "Sandbox",
"price_in_cents": 0,
"billing_interval_in_months": 1,
"single_tenant": false,
"private_network": false,
"available_releases": [
"elasticsearch-7.2.0"
],
"available_spaces": [
"omc/bonsai-gcp/us-east4/common",
"omc/bonsai/ap-northeast-1/common",
"omc/bonsai/ap-southeast-2/common",
"omc/bonsai/eu-central-1/common",
"omc/bonsai/eu-west-1/common",
"omc/bonsai/us-east-1/common",
"omc/bonsai/us-west-2/common"
]
}
</code></pre>
</div>
</div>
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
The Clusters API provides a means of managing clusters on your account. This API supports the following actions:
All calls to the Clusters API must be authenticated with an active API token.
<span id="bonsai-cluster-object"></span>
The Bonsai API provides a standard format for Cluster objects. A Cluster object includes:
<table>
<thead>
<tr>
<th>Attribute</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>slug</td><td>A string representing a unique, machine-readable name for the cluster. A cluster slug is based on its name at creation, to which a random integer is concatenated.</td>
</tr>
<tr>
<td>name</td><td>A string representing the human-readable name of the cluster.</td>
</tr>
<tr>
<td>uri</td><td>A URI to get more information about this cluster.</td>
</tr>
<tr>
<td>plan</td><td>An Object with some information about the cluster's current subscription plan. This hash has two keys:
You can see more details about the plan by passing the slug to the Plans API.
</td>
</tr>
<tr>
<td>release</td><td>An Object with some information about the cluster's current release. This hash has five keys:
You can see more details about the release by passing the slug to the Releases API.
</td>
</tr>
<tr>
<td>space</td><td>An Object with some information about where the cluster is running. This has three keys:
You can see more details about the space by passing the path to the Spaces API.
</td>
</tr>
<tr>
<td>stats</td><td>An Object with a collection of statistics about the cluster. This hash has four keys:
This attribute should not be used for real-time monitoring! Stats are updated every 10-15 minutes. To monitor real-time metrics, monitor your cluster directly, via the Index Stats API.
</td>
</tr>
<tr>
<td>access</td><td>An Object containing information about connecting to the cluster. This hash has several keys:
</td>
</tr>
<tr>
<td>state</td><td>A String representing the current state of the cluster. This indicates what the cluster is doing at any given moment. There are 8 defined states:
</td>
</tr>
</tbody>
</table>
<span id="view-all-clusters"></span>
The Bonsai API provides a method to get a list of all active clusters on your account. An HTTP GET call is made to the <span class="inline-code"><pre><code>/clusters</code></pre></span> endpoint, and Bonsai will return a JSON list of Cluster objects. This API call will not return deprovisioned clusters. This call uses pagination, so you may need to make multiple requests to fetch all clusters.
<table>
<thead>
<tr>
<th>Param</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>q</td><td>Optional. A query string for filtering matching clusters. This currently matches on the name attribute.</td>
</tr>
<tr>
<td>tenancy</td><td>Optional. A String which will constrain results to parent or child clusters. Valid values are: parent, child</td>
</tr>
<tr>
<td>location</td><td>Optional. A string representing the account, region, space, or cluster path where the cluster is located. You can get a list of available spaces with the Spaces API. Space path prefixes work here, so you can find all clusters in a given region for a given cloud.</td>
</tr>
</tbody>
</table>
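The optional filters above can be combined into a query string. A sketch with placeholder values; note that slashes in a space path are percent-encoded when passed as a query parameter:

```python
from urllib.parse import urlencode

# Hypothetical filter values for illustration.
params = {"q": "testing", "tenancy": "parent", "location": "omc/bonsai/us-east-1"}
url = "https://api.bonsai.io/clusters?" + urlencode(params)
```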
An HTTP GET call is made to <span class="inline-code"><pre><code>/clusters</code></pre></span>.
Upon success, Bonsai responds with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON list representing the clusters on your account:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"pagination": {
"page_number": 1,
"page_size": 2,
"total_records": 2
},
"clusters": [
{
"slug": "first-testing-cluste-1234567890",
"name": "first_testing_cluster",
"uri": "https://api.bonsai.io/clusters/first-testing-cluste-1234567890",
"plan": {
"slug": "sandbox-aws-us-east-1",
"uri": "https://api.bonsai.io/plans/sandbox-aws-us-east-1"
},
"release": {
"version": "7.2.0",
"slug": "elasticsearch-7.2.0",
"package_name": "7.2.0",
"service_type": "elasticsearch",
"uri": "https://api.bonsai.io/releases/elasticsearch-7.2.0"
},
"space": {
"path": "omc/bonsai/us-east-1/common",
"region": "aws-us-east-1",
"uri": "https://api.bonsai.io/spaces/omc/bonsai/us-east-1/common"
},
"stats": {
"docs": 0,
"shards_used": 0,
"data_bytes_used": 0
},
"access": {
"host": "first-testing-cluste-1234567890.us-east-1.bonsaisearch.net",
"port": 443,
"scheme": "https"
},
"state": "PROVISIONED"
},
{
"slug": "second-testing-clust-1234567890",
"name": "second_testing_cluster",
"uri": "https://api.bonsai.io/clusters/second-testing-clust-1234567890",
"plan": {
"slug": "sandbox-aws-us-east-1",
"uri": "https://api.bonsai.io/plans/sandbox-aws-us-east-1"
},
"release": {
"version": "7.2.0",
"slug": "elasticsearch-7.2.0",
"package_name": "7.2.0",
"service_type": "elasticsearch",
"uri": "https://api.bonsai.io/releases/elasticsearch-7.2.0"
},
"space": {
"path": "omc/bonsai/us-east-1/common",
"region": "aws-us-east-1",
"uri": "https://api.bonsai.io/spaces/omc/bonsai/us-east-1/common"
},
"stats": {
"docs": 0,
"shards_used": 0,
"data_bytes_used": 0
},
"access": {
"host": "second-testing-clust-1234567890.us-east-1.bonsaisearch.net",
"port": 443,
"scheme": "https"
},
"state": "PROVISIONED"
}
]
}</code></pre>
</div>
</div>
<span id="view-single-cluster"></span>
The Bonsai API provides a method to retrieve information about a single cluster on your account.
No parameters are supported for this action.
An HTTP GET call is made to <span class="inline-code"><pre><code>/clusters/[:slug]</code></pre></span>.
Upon success, Bonsai will respond with an <span class="inline-code"><pre><code>HTTP 200: OK</code></pre></span> code, along with a JSON body representing the Cluster object:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript">{
"cluster": {
"slug": "second-testing-clust-1234567890",
"name": "second_testing_cluster",
"uri": "https://api.bonsai.io/clusters/second-testing-clust-1234567890",
"plan": {
"slug": "sandbox-aws-us-east-1",
"uri": "https://api.bonsai.io/plans/sandbox-aws-us-east-1"
},
"release": {
"version": "7.2.0",
"slug": "elasticsearch-7.2.0",
"package_name": "7.2.0",
"service_type": "elasticsearch",
"uri": "https://api.bonsai.io/releases/elasticsearch-7.2.0"
},
"space": {
"path": "omc/bonsai/us-east-1/common",
"region": "aws-us-east-1",
"uri": "https://api.bonsai.io/spaces/omc/bonsai/us-east-1/common"
},
"stats": {
"docs": 0,
"shards_used": 0,
"data_bytes_used": 0
},
"access": {
"host": "second-testing-clust-1234567890.us-east-1.bonsaisearch.net",
"port": 443,
"scheme": "https"
},
"state": "PROVISIONED"
}
}</code></pre>
</div>
</div>
<span id="create-new-cluster"></span>
The Bonsai API provides a method to create new clusters on your account. An HTTP POST call is made to the <span class="inline-code"><pre><code>/clusters</code></pre></span> endpoint, and Bonsai will create the cluster.
<table>
<thead>
<tr>
<th>Param</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>name</td><td>Required. A String representing the name for your new cluster.</td>
</tr>
<tr>
<td>plan</td><td>A String representing the slug of the new plan for your cluster. You can get a list of available plans via the Plans API.</td>
</tr>
<tr>
<td>space</td><td>A String representing the Space slug where the new cluster will be created. You can get a list of available spaces with the Spaces API.</td>
</tr>
<tr>
<td>release</td><td>A String representing the search service release to use. You can get a list of available versions with the Releases API.</td>
</tr>
</tbody>
</table>
An HTTP POST call is made to <span class="inline-code"><pre><code>/clusters</code></pre></span> along with a JSON payload of the supported parameters.
Bonsai will respond with an <span class="inline-code"><pre><code>HTTP 202: Accepted</code></pre></span> code, along with a short message and details about the cluster that was created:
<div class="code-snippet-container">
<a fs-copyclip-element="click-4" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript">{
"message": "Your cluster is being provisioned.",
"monitor": "https://api.bonsai.io/clusters/test-5-x-3968320296",
"access": {
"user": "utji08pwu6",
"pass": "18v1fbey2y",
"host": "test-5-x-3968320296",
"port": 443,
"scheme": "https",
"url": "https://utji08pwu6:18v1fbey2y@test-5-x-3968320296.us-east-1.bonsaisearch.net:443"
},
"status": 202
}</code></pre>
</div>
</div>
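The create call can be sketched as a POST with a JSON payload. The name, plan, space, and release values below are placeholders drawn from the examples in this document; add your Basic Auth header as shown earlier:

```python
import json
import urllib.request

# Placeholder values; see the Plans, Spaces, and Releases APIs for real ones.
payload = {
    "name": "my_new_cluster",
    "plan": "sandbox-aws-us-east-1",
    "space": "omc/bonsai/us-east-1/common",
    "release": "elasticsearch-7.2.0",
}
req = urllib.request.Request(
    "https://api.bonsai.io/clusters",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the request; expect HTTP 202.
```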
An <span class="inline-code"><pre><code>HTTP 422: Unprocessable Entity</code></pre></span> error may arise if you try to create more Sandbox clusters than your account allows:
<div class="code-snippet-container">
<a fs-copyclip-element="click-5" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-5" class="hljs language-javascript">{
"errors": [
"The requested plan is not available for provisioning. Solution: Please use the plans endpoint for a list of available plans.",
"Your request could not be processed. "
],
"status":422
}</code></pre>
</div>
</div>
If you are not creating a Sandbox cluster, please refer to the API Error 422: Unprocessable Entity documentation.
<span id="update-cluster"></span>
The Bonsai API provides a method to update the name or plan of your cluster. An HTTP PUT call is made to the <span class="inline-code"><pre><code>/clusters/[:slug]</code></pre></span> endpoint, and Bonsai will update the cluster.
<table>
<thead>
<tr>
<th>Param</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>name</td><td>A String representing the new name for your cluster. Changing the cluster name will not change its URL.</td>
</tr>
<tr>
<td>plan</td><td>A String representing the slug of the new plan for your cluster. Updating the plan may trigger a data migration. You can get a list of available plans via the Plans API.</td>
</tr>
</tbody>
</table>
To make a change to an existing cluster, make an HTTP PUT call to <span class="inline-code"><pre><code>/clusters/[:slug]</code></pre></span> with a JSON body for one or more of the supported params.
Bonsai will respond with an <span class="inline-code"><pre><code>HTTP 202: Accepted</code></pre></span> code, along with short message:
<div class="code-snippet-container">
<a fs-copyclip-element="click-6" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-6" class="hljs language-javascript">{
"message": "Your cluster is being updated.",
"monitor": "https://api.bonsai.io/clusters/[:slug]",
"status": 202
}</code></pre>
</div>
</div>
<span id="destroy-cluster"></span>
The Bonsai API provides a method to delete a cluster from your account.
No parameters are supported for this action.
An HTTP DELETE call is made to the <span class="inline-code"><pre><code>/clusters/[:slug]</code></pre></span> endpoint.
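As with the update call, the request can be sketched with Ruby's Net::HTTP. The slug and token credentials are placeholders, and HTTP Basic auth with the token's key and secret is an assumption.

```ruby
# Sketch of a destroy call to the Bonsai API. The slug and token
# credentials are placeholders; HTTP Basic auth is an assumption.
require "net/http"
require "uri"

uri = URI("https://api.bonsai.io/clusters/my-cluster-slug")

request = Net::HTTP::Delete.new(uri)
request.basic_auth("token_key", "token_secret")

# To send: Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
```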
Bonsai will respond with an HTTP 202: Accepted code, along with a short message:
<div class="code-snippet-container">
<a fs-copyclip-element="click-7" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-7" class="hljs language-javascript">{
"message": "Your cluster is being deprovisioned.",
"monitor": "https://api.bonsai.io/clusters/[:slug]",
"status": 202
}</code></pre>
</div>
</div>
The Bonsai API is currently in its Alpha release phase. It may not be feature-complete, and is subject to change without notice. If you have any questions about the roadmap of the API, please reach out to support.
This introduction to the Bonsai API includes the following sections:
Bonsai provides a REST API at https://api.bonsai.io for managing clusters, exploring plans, and checking out versions and available regions. This API allows customers to create, view, manage and destroy clusters via HTTP calls instead of through the dashboard. The API supports four endpoints:
To interact with the API, users must create an API Token. You can read more about creating those tokens here. An API token will have a key and a secret. The API supports multiple ways of authenticating requests with an API token. All calls to the API using an API token are logged for auditing purposes. Additional constraints on API tokens are in development.
The API generally conforms to RESTful principles. Users interact with their clusters using standard HTTP verbs such as GET, PUT, POST, PATCH and DELETE. The Bonsai API accepts and returns JSON payloads. No other formats are supported at this time.
The Bonsai API accepts request bodies in JSON format only. Request bodies that are not in proper JSON will receive an HTTP 422: Unprocessable Entity response code, along with a JSON body containing messages about the problem.
A <span class="inline-code"><pre><code>Content-Type: application/json</code></pre></span> HTTP header is preferred, but not required. Requests may also provide an <span class="inline-code"><pre><code>Accept</code></pre></span> header, as either <span class="inline-code"><pre><code>Accept: */*</code></pre></span> or <span class="inline-code"><pre><code>Accept: application/json</code></pre></span>. Any other accept type will receive an HTTP 422 error.
The Bonsai API responds with standard HTTP response codes. All HTTP message bodies will be in JSON format. The API documentation for the call will describe the response bodies that a client should expect.
In the event that one or more errors are raised, the API will return a JSON response detailing the problem. The response will have a status code, and an array of error messages. For example:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">{
"errors": [
"This request has failed authentication. Please read the docs or email us at support@bonsai.io."
],
"status": 401
}</code></pre>
</div>
</div>
Error codes for the API are documented in the API Error Codes section.
The Bonsai API limits any given token to 60 requests per minute. Provision requests are limited to 5 per minute. Making too many requests in a short amount of time will result in a temporary period of HTTP 429 responses.
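One simple client-side guard is to pace requests so that a single token never exceeds one call per second, which keeps it under the 60-requests-per-minute ceiling. This is a minimal sketch; real clients may prefer a proper token-bucket limiter.

```ruby
# Sketch: space requests at least MIN_INTERVAL seconds apart so a single
# token stays under the 60-requests-per-minute limit.
REQUESTS_PER_MINUTE = 60
MIN_INTERVAL = 60.0 / REQUESTS_PER_MINUTE # 1.0 second between calls

def paced_each(items)
  items.each do |item|
    started = Time.now
    yield item
    elapsed = Time.now - started
    # Sleep off the remainder of the interval before the next request
    sleep(MIN_INTERVAL - elapsed) if elapsed < MIN_INTERVAL
  end
end
```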
Additionally, access to the API may be blocked if:
The Bonsai team considers the following changes to be backwards compatible; they can be made without advance notice:
The Bonsai team is committed to providing the best, most-reliable platform for deploying and managing Elasticsearch clusters. If you have a question, issue, or just want to submit a feature request, please reach out to our support team at support@bonsai.io.
An API Token is required in order to access the API. Tokens are associated to a user within a given account. To create a credential, navigate to your account and click on the API Tokens tab:
Click on the Generate Token button to create a token:
Whenever a token is submitted to the API, a log entry is generated for that token. The logs will indicate which token was used, how it was authenticated, who it belongs to (and the IP address of the requester), and a host of other information:
You can see even more details about a request by clicking on its Details button:
If there is some security concern with the request, there will be a flash message indicating that it was not made safely.
To revoke a token, click on the Revoke button. This will bring up a confirmation dialog. Confirm the request, and the token will be revoked. Once a token is revoked, it can no longer be used to access the API. Requests to the API using a revoked token will result in an HTTP 401: Authorization error.
All API endpoints that support pagination recognize the following request parameters:
<table>
<thead>
<tr>
<th>Parameter</th><th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>page</td><td>The page number, starting at 1</td>
</tr>
<tr>
<td>size</td><td>The size of each page, with a max of 100</td>
</tr>
</tbody>
</table>
All API responses which support pagination will include a top level pagination fragment in the JSON response body, which looks like:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">"pagination": {
"page_number": 1,
"page_size": 20,
"total_records": 255
}</code></pre>
</div>
</div>
With the above information, you can infer how many pages there are and iterate until the list is exhausted.
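For example, the page count can be derived from the fragment above and used to drive the iteration. In this sketch, the commented-out `fetch_page` is a hypothetical stand-in for whatever authenticated GET your client performs.

```ruby
# Sketch: derive the number of pages from the pagination fragment and
# iterate until the list is exhausted. fetch_page is a hypothetical
# stand-in for an authenticated GET with ?page=N&size=M.
require "json"

pagination = JSON.parse('{"page_number": 1, "page_size": 20, "total_records": 255}')
total_pages = (pagination["total_records"] / pagination["page_size"].to_f).ceil
# 255 records at 20 per page -> 13 pages

(1..total_pages).each do |page|
  # fetch_page(page, pagination["page_size"])
end
```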
An HTTP 503: Service Unavailable error indicates a problem with a server somewhere in the network. It is most likely related to a node restart affecting your primary shard(s) before a replica can be promoted.
The easiest solution is to simply catch and retry HTTP 503's. If you've seen this several times in a short period of time, please send us an email and we will investigate.
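A minimal catch-and-retry wrapper with exponential backoff might look like the following. `TransientError` is a placeholder for however your client surfaces an HTTP 503; substitute your library's actual exception class.

```ruby
# Sketch: retry transient failures (like HTTP 503) with exponential backoff.
# TransientError is a placeholder for your client's HTTP 503 exception.
class TransientError < StandardError; end

def with_retries(max_attempts: 3, base_delay: 0.5)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue TransientError
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1))) # back off: 0.5s, 1s, 2s, ...
    retry
  end
end
```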
An HTTP 502: Bad Gateway error is rare, but when it does happen, there are really only two root causes: a problem with the load balancer, or Elasticsearch is returning a high number of deprecation warnings.
If you are seeing occasional, generic messages about HTTP 502 errors, then the most likely cause is the load balancer. The short explanation is that there are a few cases where the proxy software hits an OOM error and is restarted. This causes the load balancer to send back an HTTP 502. The error message will be very generic, and it will not say anything about Bonsai.io. The easiest solution is to simply catch and retry these HTTP 502's.
If you are seeing frequent, repeated HTTP 502 messages, and those messages say something like "A problem occurred with the Elasticsearch response. Please check status.bonsai.io or contact support@bonsai.io for assistance", then it's likely due to Elasticsearch's response overwhelming the load balancer with HTTP headers. These headers might look something like this:
<div class="code-snippet w-richtext">
<pre><code fs-codehighlight-element="code" class="hljs language-javascript">"The [string] field is deprecated, please use [text] or [keyword] instead on [my_field]"</code></pre>
</div>
This usually happens when the client sends a request using a large number of deprecated references. Elasticsearch responds with an HTTP header for each one. If there is a large number (many thousands) of headers, the load balancer will simply close the connection and respond with an HTTP 502 message. The solution is to review your client and application code, and either: A) use smaller bulk requests, or B) update the code so that it's no longer using deprecated features.
If you are having trouble resolving this issue, please let us know.
An HTTP 400 Bad Request can be caused by a variety of problems. However, it is generally a client-side issue. An HTTP 400 implies the problem is not with Elasticsearch, but rather with the request to Elasticsearch.
For example, if you have a mapping that expects a number in a particular field, and then index a document with some other data type in that field, Elasticsearch will reject it with an HTTP 400:
<div class="code-snippet-container">
<a fs-copyclip-element="click" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript"># Create a document with a field called "views" and the number 0
POST /myindex/mytype/1?pretty -d '{"views":0}'
{
"_index" : "myindex",
"_type" : "mytype",
"_id" : "1",
"_version" : 1,
"_shards" : {
"total" : 2,
"successful" : 2,
"failed" : 0
},
"created" : true
}
# Elasticsearch has automagically determined the "views" field to be a long (integer) data type:
GET /myindex/_mapping?pretty
{
"myindex" : {
"mappings" : {
"mytype" : {
"properties" : {
"views" : {
"type" : "long"
}
}
}
}
}
}
# Try to create a new document with a string value instead of a long in the "views" field:
POST /myindex/mytype/2?pretty -d '{"views":"zero"}'
{
"error" : {
"root_cause" : [ {
"type" : "mapper_parsing_exception",
"reason" : "failed to parse [views]"
} ],
"type" : "mapper_parsing_exception",
"reason" : "failed to parse [views]",
"caused_by" : {
"type" : "number_format_exception",
"reason" : "For input string: \"zero\""
}
},
"status" : 400
}</code></pre>
</div>
</div>
The way to troubleshoot an HTTP 400 error is to read the response carefully and understand which part of the request is raising the exception. That will help you to identify a root cause and remediate.
The HTTP 501 Not Implemented error means that the requested feature is not available on Bonsai. Elasticsearch offers a handful of API endpoints that are not exposed on Bonsai for security and performance reasons. You can read more about these in the Unsupported API Endpoints documentation.
The "Cluster not found"-variant HTTP 404 is distinct from the "Index not found" message. This error message indicates that the routing layer is unable to match your URL to a cluster resource. This can be caused by a few things:
If you have confirmed that A) the URL is correct (see Connecting to Bonsai for more information), B) the cluster has not been destroyed, and C) the cluster should be up and running, and you're still receiving HTTP 404 responses from the cluster, then send us an email and we'll investigate.
This error is raised when an update request is sent to a cluster that has been placed into read-only mode. Clusters can be placed into read-only mode for one of several reasons, but the most common reason is due to an overage.
If you're seeing this error, check on your cluster status and address any overages you see. You can find more information about this in our Metering on Bonsai documentation, specifically the section "Checking on Cluster Status". If you're not seeing any overages and the cluster is still set to read-only, please contact us and let us know.
This error is raised when a request is sent to a cluster that has been disabled. Clusters can be disabled for one of several reasons, but the most common reason is due to an overage.
If you're seeing this error, check on your cluster status and address any overages you see. You can find more information about this in our Metering on Bonsai documentation, specifically the section "Checking on Cluster Status". If you're not seeing any overages and the cluster is still disabled, please contact us and let us know.
All Bonsai clusters are provisioned with a randomly generated set of credentials. These must be supplied with every request in order for the request to be processed. An HTTP 401 response indicates the authentication credentials were missing from the request.
To elaborate on this, all Bonsai cluster URLs follow this format:
<div class="code-snippet w-richtext">
<pre><code fs-codehighlight-element="code" class="hljs language-javascript">https://username:password@hostname.region.bonsai.io
</code></pre>
</div>
The username and password in this URL are not the credentials used for logging in to Bonsai, but are randomly generated alphanumeric strings. So your URL might look something like:
<div class="code-snippet w-richtext">
<pre><code fs-codehighlight-element="code" class="hljs language-javascript">https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io
</code></pre>
</div>
The credentials <span class="inline-code"><pre><code>kjh4k3j:lv9pngn9fs</code></pre></span> must be present with all requests to the cluster in order for them to be processed. This is a security precaution to protect your data (on that note, we strongly recommend keeping your full URL a secret, as anyone with the credentials can view or modify your data).
It's possible to get an HTTP 401 response when attempting to access one of the Unsupported API Endpoints. If you're trying to access server-level tools, restart a node, etc., the request will fail regardless of credentials. Please read the documentation on unavailable APIs to determine whether the failing request is valid.
Please ensure that the credentials are correct; you can find them on your cluster dashboard. Note that there is a tool for rotating credentials, so it's entirely possible to be using an outdated set.
Heroku users should also inspect the contents of the <span class="inline-code"><pre><code>BONSAI_URL</code></pre></span> config variable. This can be found in the Heroku app dashboard, or by running <span class="inline-code"><pre><code>heroku config:get BONSAI_URL</code></pre></span>. The contents of this variable should match the URL shown in the Bonsai cluster dashboard exactly.
If you're sure that the credentials are correct and being supplied, send us an email and we will investigate.
In some rare cases, the Bonsai Ops Team will put a cluster into maintenance mode. There are a lot of reasons this may happen:
Maintenance mode blocks updates to the cluster, but not searches. If you're seeing this message, it will be temporary; it rarely lasts for more than a minute or two. If your cluster has been in a maintenance state for more than a few minutes, please contact support.
The HTTP 500 Internal Server Error is both rare and often difficult to reproduce. It generally indicates a problem with a server somewhere. It may be Elasticsearch, but it could also be a node in the load balancer or proxy. A process restarting is typically the root cause, which means it will often resolve itself within a few seconds.
The easiest solution is to simply catch and retry HTTP 500's. If you've seen this several times in a short period of time, please send us an email and we will investigate.
This error indicates that the request body to Elasticsearch exceeded the limits of the Bonsai proxy layer. This can be caused by a few things:
If you're seeing this error, check that your queries are sane and not 40MB of flattened JSON. Ensure you're not explicitly sending lots of headers to your cluster.
If you're seeing this message during bulk indexing, then decrease your batch sizes by half and try again. Repeat until you can reindex without receiving an HTTP 413.
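That halving loop can be sketched as follows. `send_bulk` is a hypothetical stand-in for your bulk-indexing call, assumed to raise `PayloadTooLarge` when the cluster answers with an HTTP 413. Note that this simple version restarts from the first batch after a failure, so it suits idempotent indexing.

```ruby
# Sketch: retry bulk indexing with progressively smaller batches until the
# payload fits under the proxy limit. send_bulk is hypothetical and should
# raise PayloadTooLarge when the cluster responds with HTTP 413.
class PayloadTooLarge < StandardError; end

def index_in_batches(docs, batch_size)
  docs.each_slice(batch_size) { |batch| send_bulk(batch) }
rescue PayloadTooLarge
  raise if batch_size <= 1
  index_in_batches(docs, batch_size / 2) # halve and retry from the start
end
```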
Finally, if it is indeed a large file causing the problem, then the odds are good that embedded metadata and media are responsible for its size. You may need to use a file editing tool to remove the media (images, movies, sounds) and possibly the metadata from the file, then try again. If the files are user-submitted, consider capping the file size your users are able to upload.
Customers who are unable to change their application to accommodate smaller request bodies or payloads should reach out to us. We can lift this limit for customers on Business and Enterprise plans, subject to some caveats on performance.
The proximate cause of HTTP 429 errors is an app exceeding its concurrent connection limits for too long. This is often due to a spike in usage -- perhaps a new feature has been deployed, a service is growing quickly, or a regression has been introduced in the code.
It can also happen when reindexing (for example: when engineers want to push all the data into Elasticsearch or OpenSearch as quickly as possible, which means lots of parallelization). Unusually expensive requests, or other unusual latency and performance degradation within Elasticsearch itself can also cause unexpected queuing and result in 429 errors.
In most cases, 429 errors can be solved by upgrading to a plan with higher connection limits; new connection limits are applied immediately. If that's not viable, then you may need to perform additional batching of your updates (such as queuing and bulk updating) or searches (with the multi-search API as one example). We have some suggestions for optimizing your requests that can help point you in the right direction.
This response is distinct from the "Cluster not found" message. This message indicates that you're trying to access an index that is not registered with Elasticsearch. For example:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">GET /nonexistent_index/_search?pretty
{
"error" : {
"root_cause" : [ {
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "nonexistent_index",
"index" : "nonexistent_index"
} ],
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "nonexistent_index",
"index" : "nonexistent_index"
},
"status" : 404
}</code></pre>
</div>
</div>
There are a couple reasons you might see this:
By default, Elasticsearch has a feature that will automatically create indices. Simply pushing data into a non-existing index will cause that index to be created with mappings inferred from the data. In accordance with Elasticsearch best practices for production applications, we've disabled this feature on Bonsai.
However, some popular tools such as Kibana and Logstash do not support explicit index creation, and rely on auto-creation being available. To accommodate these tools, we've whitelisted popular time-series index names such as <span class="inline-code"><pre><code>logstash*</code></pre></span>, <span class="inline-code"><pre><code>requests*</code></pre></span>, <span class="inline-code"><pre><code>events*</code></pre></span>, <span class="inline-code"><pre><code>kibana*</code></pre></span>, and <span class="inline-code"><pre><code>kibana-int*</code></pre></span>.
The solution to this error message is to confirm that the index name is correct. If so, make sure it is properly created (with all the mappings it needs), and try again.
The HTTP 504 Gateway Timeout error is returned when a request takes longer than 60 seconds to process, regardless of whether the process is waiting on Elasticsearch or sitting in a connection queue. This can sometimes be due to network issues, and sometimes it can occur when Elasticsearch is IO-bound and unable to process requests quickly. Complex requests are more likely to receive an HTTP 504 error in these cases.
For more information on timeouts, please see our recommendations on Connection Management.
An HTTP 402 response indicates a cluster’s account is behind on payments. All requests to the cluster will return the following message:
<div class="code-snippet w-richtext">
<pre><code fs-codehighlight-element="code" class="hljs language-javascript">{
"code": 402,
"message": "Cluster has been disabled due to non-payment. Please update billing info or contact support@bonsai.io for further details."
}</code></pre>
</div>
The cluster cannot be used until the overdue payment is successfully processed. If an account remains unpaid for more than a week, it will be destroyed: any active clusters will be terminated and all of their data removed.
Please refer to this article on how to check an account’s billing information. If you’ve updated the account’s billing information and confirmed that the credit card on file is working, please contact us and let us know that the error persists.
Troubleshooting and resolving connection issues can be time consuming and frustrating. This article aims to reduce the friction of resolving connection problems by offering suggestions for quickly identifying a root cause.
If you're seeing connection errors while attempting to reach your Bonsai cluster, the first step is Don't Panic. Virtually all connection issues can be fixed in under 5 minutes once the root cause is identified.
The next step is to review the documentation on Connecting to Bonsai, which contains plenty of information about creating connections and testing the availability of your Bonsai cluster.
If the basic documentation doesn't help resolve the problem, the next step is to read the error message very carefully. Often the error message contains enough information to explain the problem. For example, something like this may show up in your logs:
<div class="code-snippet w-richtext">
<pre><code fs-codehighlight-element="code" class="hljs language-javascript">Faraday::ConnectionFailed (Connection refused - connect(2) for "localhost" port 9200):</code></pre>
</div>
This message tells us that the client tried to reach Elasticsearch over localhost, which is a red flag (discussed below). It means the client is not pointed at your Bonsai cluster.
Next, look for an HTTP error code. Bonsai provides an entire article dedicated to explaining what different HTTP codes mean here: HTTP Error Codes. Like the error message, the HTTP error code often provides enough diagnostic information to identify the underlying problem.
Last, don't make any changes until you understand the error message and HTTP code (if one is present), and have read the relevant documentation. Most of these errors occur during the initial set up and configuration step, so also read the relevant Quickstart guide if one exists for your language / framework.
There are issues which do not return HTTP error codes because the request can not be made at all, or because the request is not recognized by Elasticsearch. These issues are commonly one of the following:
This error indicates that a request was made to a server that is not accepting connections.
This error most commonly occurs when your Elasticsearch client has not been properly configured. By default, many clients will try to connect to something like <span class="inline-code"><pre><code>localhost:9200</code></pre></span>. This is a problem because Bonsai will never be running on your localhost or network, regardless of your platform.
Simply put, localhost means "this computer." Which computer that is depends on the machine running the code. If you're running the code on your laptop, then localhost is the machine in front of you; if you're running the code on AWS or Heroku, then localhost is the node running your application.
Bonsai clusters run in a private network that does not include any of your infrastructure, which is why trying to reach it via localhost will always fail.
By default, Elasticsearch runs on port 9200. While you can access your Bonsai cluster over port 9200, this is not recommended due to lack of encryption in transit.
All users need to ensure their Elasticsearch client is pointed at the correct URL and is properly configured.
If you’re using the elasticsearch-rails client, simply add the following gem to your Gemfile:
<div class="code-snippet-container">
<a fs-copyclip-element="click-2" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-2" class="hljs language-javascript">gem 'bonsai-elasticsearch-rails'</code></pre>
</div>
</div>
The bonsai-elasticsearch-rails gem is a shim that configures your Elasticsearch client to load the cluster URL from an environment variable called <span class="inline-code"><pre><code>BONSAI_URL</code></pre></span>. You can read more about it on the project repository.
If you'd prefer to keep your Gemfile sparse, you can initialize the client yourself like so:
<div class="code-snippet-container">
<a fs-copyclip-element="click-3" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-3" class="hljs language-javascript"># config/initializers/elasticsearch.rb
Elasticsearch::Model.client = Elasticsearch::Client.new url: ENV['BONSAI_URL']</code></pre>
</div>
</div>
If you opt for this method, make sure to add the <span class="inline-code"><pre><code>BONSAI_URL</code></pre></span> to your environment. It will be automatically created for Heroku users. Users managing their own application environment will need to run something like:
<div class="code-snippet-container">
<a fs-copyclip-element="click-4" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-4" class="hljs language-javascript">$ export BONSAI_URL="https://randomuser:randompass@something-12345.us-east-1.bonsaisearch.net"</code></pre>
</div>
</div>
The Elasticsearch client is probably using the default <span class="inline-code"><pre><code>localhost:9200</code></pre></span> or <span class="inline-code"><pre><code>127.0.0.1:9200</code></pre></span> (127.0.0.1 is the IPv4 equivalent of "localhost"). You'll need to make sure that the client is configured to use the correct URL for your cluster, and that this configuration is not being overwritten somewhere.
The Elasticsearch API has a variety of endpoints defined, like <span class="inline-code"><pre><code>/_cat/indices</code></pre></span>. Each of these endpoints can be called with a specific HTTP method. This error simply indicates a request was made to an endpoint using the wrong method.
Here are some examples:
<div class="code-snippet-container">
<a fs-copyclip-element="click-5" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-5" class="hljs language-javascript">
# Request to /_cat/indices with GET method:
$ curl -XGET https://user:pass@something-12345.us-east-1.bonsai.io/_cat/indices
green open test1 1 1 0 0 318b 159b
green open test2 1 1 0 0 318b 159b
# Request to /_cat/indices with PUT method:
$ curl -XPUT https://user:pass@something-12345.us-east-1.bonsai.io/_cat/indices
No handler found for uri [/_cat/indices] and method [PUT]</code></pre>
</div>
</div>
The solution for this issue is simple: use the correct HTTP method. The Elasticsearch documentation will offer guidance on which methods pertain to the endpoint you're trying to use.
The Domain Name System (DNS) is a worldwide, decentralized and distributed directory service which translates human-readable domains like www.example.com into network addresses like 93.184.216.119. When a client makes a request to a URL like "google.com," the application's networking layer will use DNS to translate the domain to an IP address so that it can pass the request along to the right server.
The "Name or service not known" error indicates that there has been a failure in determining an IP address for the URL's domain. This typically implicates one of several root causes:
The first troubleshooting step is to carefully read the error and double-check that the URL (particularly the domain name) is spelled correctly. If this is your first time accessing the cluster, then a typo is almost certainly the problem.
If this error arises during regular production use, then there is probably a DNS outage. DNS outages are outside of Bonsai's control. There are a couple types of outages, but the ones that have affected our users before are:
A TLD is something like ".com," ".net," ".org," etc. A TLD outage affects all domains under the TLD. So an outage of the ".net" TLD would affect all domains with the ".net" suffix.
Fortunately, all Bonsai clusters can be accessed via either of two domains:
If there is a TLD outage, you should be able to restore service by switching to the other domain. In other words, if your application is sending traffic to <span class="inline-code"><pre><code>https://user:pass@something-12345.us-east-1.bonsai.io</code></pre></span>, and the ".io" TLD goes down, you can switch over to <span class="inline-code"><pre><code>https://user:pass@something-12345.us-east-1.bonsaisearch.net</code></pre></span>, and it will fix the error.
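The TLD swap described above can be scripted so an application retries against the alternate domain automatically. This is a hypothetical sketch, not part of any Bonsai client library: the <span class="inline-code"><pre><code>alternate_cluster_url</code></pre></span> helper name is illustrative, and only the bonsai.io / bonsaisearch.net pair comes from the documentation.

```ruby
# Hypothetical helper: given a cluster URL on one of the two documented
# domains, return the equivalent URL on the other domain, so the
# application can retry requests during a TLD outage.
def alternate_cluster_url(url)
  if url.include?("bonsai.io")
    url.sub("bonsai.io", "bonsaisearch.net")
  else
    url.sub("bonsaisearch.net", "bonsai.io")
  end
end

puts alternate_cluster_url("https://user:pass@something-12345.us-east-1.bonsai.io")
# => https://user:pass@something-12345.us-east-1.bonsaisearch.net
```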
Many ISPs operate their own DNS servers. This way, requests made from a node in their network can get low-latency responses for IP addresses. Most ISPs also have a fleet of DNS servers, at minimum a primary and a secondary. However, this is not a requirement, and there have been instances where an ISP's entire DNS service is taken offline.
There are also multitenant services which have Software Defined Networks (SDN) running Internal DNS (iDNS). Regressions in this software can also lead to application-level DNS name resolution problems.
If you've already confirmed the domain is correct and swapped to another TLD for your cluster, and you're still having issues, then you are probably dealing with an ISP/DNS or SDN/iDNS outage. One way to confirm this is to try making requests to other common domains like google.com. A name resolution error on google.com or example.com points to a local DNS problem.
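That check can be scripted with Ruby's standard-library resolver. This is a minimal sketch; the helper name is illustrative, and a real monitor would test a well-known external domain like example.com rather than localhost.

```ruby
require 'resolv'

# Hypothetical check: if a well-known domain also fails to resolve,
# the problem is local DNS, not the cluster's domain.
def dns_healthy?(domain)
  Resolv.getaddress(domain)
  true
rescue Resolv::ResolvError
  false
end

# "localhost" resolves via the hosts file, so this works even offline:
puts dns_healthy?("localhost")  # => true
```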
If this happens, there is basically nothing you can do about it as a user, aside from reporting the problem to your network administrator or ISP.
Users who are seeing persistent HTTP 401 Error Codes may be using a client that is not handling authentication properly. As explained in the error code documentation, as well as in Connecting to Bonsai, all Bonsai clusters have a randomly-generated username and password which must be present in the request in order for it to be accepted.
What's less clear from this documentation is that including the complete URL in your Elasticsearch client may not be enough for the client to authenticate its requests.
This is due to how HTTP Basic Access Authentication works. In short, the request needs a header with an "Authorization" field and a value of <span class="inline-code"><pre><code>Basic</code></pre></span> followed by a base64 string. That string is the base64 encoding of the username and password, concatenated with a colon (":").
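For illustration, here is that header value computed in Ruby for a pair of placeholder credentials (a real cluster has its own randomly generated pair):

```ruby
require 'base64'

# Placeholder credentials, joined with a colon per Basic auth.
credentials = "user:pass"

# strict_encode64 omits the trailing newline that encode64 appends.
header_value = "Basic " + Base64.strict_encode64(credentials)
puts header_value  # => Basic dXNlcjpwYXNz
```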
Most clients handle the headers for you automatically and in the background. But not all do, especially if the client is part of a bleeding-edge language or framework, or if it's something homebrewed, forked, or patched.
Here is a basic example in Ruby using <span class="inline-code"><pre><code>Net::HTTP</code></pre></span>, demonstrating how a URL with auth can still receive an HTTP 401 response:
<div class="code-snippet-container">
<a fs-copyclip-element="click-6" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-6" class="hljs language-ruby">require 'base64'
require 'net/http'
# URL with credentials:
uri = URI("https://randomuser:randompass@something-12345.us-east-1.bonsaisearch.net")
# Net::HTTP does not automatically detect the presence of
# authentication credentials and insert the proper Authorization header.
req = Net::HTTP::Get.new(uri)
# This request will fail with an HTTP 401, even though the credentials
# are in the URI:
res = Net::HTTP.start(uri.hostname, uri.port, :use_ssl => true) {|http|
http.request(req)}
# The proper header must be added manually:
credentials = "randomuser:randompass"
req['Authorization'] = "Basic " + Base64::encode64(credentials).chomp
# The request now succeeds
res = Net::HTTP.start(uri.hostname, uri.port, :use_ssl => true) {|http|
http.request(req)
}</code></pre>
</div>
</div>
From the Ruby example, it's clear that there are some cases where the credentials are simply ignored instead of being automatically put into a header. This causes the Basic authentication to fail, resulting in an HTTP 401 response.
Simply put, if you're seeing HTTP 401 responses even while including the credentials in the URL, and you've confirmed that the credentials are entered correctly and have not expired, then the problem is probably a missing header. You can detect this with tools like socat or Wireshark if you're familiar with network traffic inspection. Or, you can try adding the headers manually.
Here are some examples of calculating the base64 string and adding the request header in several different languages:
<div class="code-snippet-container">
<a fs-copyclip-element="click-7" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-7" class="hljs language-java"> public static Map<String, String> getHeaders() {
Map<String, String> headers = new HashMap<>();
String credentials = "randomuser:randompass";
String auth = "Basic " + Base64.encodeToString(credentials.getBytes(), Base64.NO_WRAP);
headers.put("Authorization", auth);
headers.put("Content-type", "application/json");
return headers;
}</code></pre>
</div>
</div>
<div class="code-snippet-container">
<a fs-copyclip-element="click-8" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-8" class="hljs language-java">public static Map<String, String> getHeaders() {
Map<String, String> headers = new HashMap<>();
String credentials = "randomuser:randompass";
String auth = "Basic " + Base64.getEncoder().encodeToString(credentials.getBytes());
headers.put("Authorization", auth);
headers.put("Content-type", "application/json");
return headers;
}</code></pre>
</div>
</div>
<div class="code-snippet-container">
<a fs-copyclip-element="click-9" href="#" class="btn w-button code-copy-button" title="Copy">
<img class="copy-image" src="https://global-uploads.webflow.com/63c81e4decde60c281417feb/6483934eeefb356710a1d2e9_icon-copy.svg" loading="lazy" alt="">
<img class="copied-image" src="https://cdn.prod.website-files.com/63c81e4decde60c281417feb/64839e207c2860eb9e6aa572_icon-copied.svg" loading="lazy" alt="">
</a>
<div class="code-snippet">
<pre><code fs-codehighlight-element="code" fs-copyclip-element="copy-this-9" class="hljs language-ruby">require 'base64'
require 'net/http'
uri = URI("https://something-12345.us-east-1.bonsaisearch.net")
req = Net::HTTP::Get.new(uri)
credentials = "randomuser:randompass"
req['Authorization'] = "Basic " + Base64::encode64(credentials).chomp
res = Net::HTTP.start(uri.hostname, uri.port, :use_ssl => true) {|http|
http.request(req)
}</code></pre>
</div>
</div>
As mentioned in the Security documentation, all Bonsai clusters support SSL/TLS. This enables your traffic to be encrypted over the wire. In some rare cases, users may see something like this when trying to access their cluster over HTTPS:
<div class="code-snippet w-richtext">
<pre><code fs-codehighlight-element="code" class="hljs language-plaintext">SSL: CERTIFICATE_VERIFY_FAILED</code></pre>
</div>
This is almost certainly due to a server-level misconfiguration.
A complete examination of how SSL works is well outside the scope of this article, but in short: it utilizes cryptographic signatures to facilitate a chain of trust from a root certificate issued by a certificate authority (CA) to a certificate deployed to a server. The latter is used to prove the server's ownership before initiating public key exchange.
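That chain of trust can be demonstrated with Ruby's OpenSSL bindings. This is a self-contained sketch, not Bonsai's actual certificate setup: a throwaway self-signed certificate fails verification until it is explicitly added to the trust store, which is essentially the situation CERTIFICATE_VERIFY_FAILED is reporting.

```ruby
require 'openssl'

# Build a throwaway self-signed certificate (all details illustrative).
key  = OpenSSL::PKey::RSA.new(2048)
name = OpenSSL::X509::Name.parse("/CN=example.test")

cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = name
cert.issuer     = name              # self-signed: issuer == subject
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600
cert.sign(key, OpenSSL::Digest.new("SHA256"))

store = OpenSSL::X509::Store.new
puts store.verify(cert)   # false: no chain of trust to a known root

store.add_cert(cert)      # trust it explicitly, like a CA root
puts store.verify(cert)   # true
```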
Think of a certificate authority as a mediator of sorts. A company like Bonsai goes to the CA and provides proof of identity and ownership of a domain