Release Orchestration means the management, direction, and automation of a software release (or go-live) process. All the steps, checks, gates and validations a typical software release goes through can be managed by a Release Orchestration system like Vamp. A step can target a certain percentage of visitors, and/or a specific visitor segment defined by Layer-7 elements such as geo-location, device type, headers, cookies or IP addresses. Multiple steps can be chained one after the other. For each step, a Release Orchestration system can validate, on both technical and business metrics, whether the release is still within an acceptable quality range. These ranges are typically defined by a combination of DevOps and SRE “golden metrics”, thresholds and error budgets, SLA/SLO-based targets, and ML-powered anomaly detection. There can also be rolled-up top-line metrics, like Vamp Health (tm), that give an “early warning signal” when the quality of a release decreases. A Release Orchestration system can orchestrate releases within environments (intra-cluster) and across different environments and clusters (inter-cluster). It typically uses traffic-shaping mechanisms like proxies, load balancers, ingresses and service meshes for its segmentation.
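The stepped, metric-validated release process described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Vamp's actual API: the step percentages, metric names and thresholds are all assumptions made for the example.

```python
# Hypothetical sketch of stepped-release logic; names and thresholds
# are illustrative, not part of Vamp's real API.

CANARY_STEPS = [5, 25, 50, 100]  # percentage of traffic routed to the new version per step

def within_quality_range(metrics, max_error_rate=0.01, max_p95_latency_ms=250):
    """Validate "golden metrics" against example thresholds / error budgets."""
    return (metrics["error_rate"] <= max_error_rate
            and metrics["p95_latency_ms"] <= max_p95_latency_ms)

def run_release(fetch_metrics, set_traffic_weight):
    """Walk through the release steps, validating after each traffic shift."""
    for weight in CANARY_STEPS:
        set_traffic_weight(weight)          # reconfigure the traffic-shaping layer
        metrics = fetch_metrics()           # observe technical/business metrics
        if not within_quality_range(metrics):
            set_traffic_weight(0)           # automated rollback
            return "rolled back at %d%%" % weight
    return "released"
```

The `fetch_metrics` and `set_traffic_weight` callables stand in for whatever metric store and traffic-shaping mechanism the orchestration system is wired to.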
Continuous Delivery or Deployment is the technical act of making sure that runtime artifacts (like containerised services and applications) that come out of a CI pipeline are deployed to a specific environment and are running correctly. Releasing is the process of “going live” with the new software version in a controlled and safe way. It is customer-, business- and value-focused, rather than technically focused.
Vamp is a Release Orchestration system. It extends and enriches your current CI/CD solution with automated and smart features to put a controlled, safe and scalable release process in place. So: not really, although Vamp has some elements of very advanced CD tools.
No. Vamp’s DNA is very much “cloud native” and as such it works tightly integrated with Kubernetes and k8s-related cloud-native components. But Vamp can also apply its release automation through non-k8s traffic-shaping mechanisms like HAProxy and Nginx, or cloud-vendor load balancers like AWS ALB. Vamp’s event and metric detection, validation and ML processes can also be fed from non-k8s metric stores, datasources and other systems, such as AWS CloudWatch, Google Stackdriver, event streams like Kafka, SQS or NATS, Google Analytics, Sentry.io etc.
You can configure a service mesh to create a static routing or traffic-split situation. But the service mesh knows nothing about when it is safe to move to the next release step (or roll back), nor about what that step should be. In other words, a service mesh is an abstraction layer on top of proxies. A Release Orchestration system like Vamp is a smart engine that makes use of and drives a service mesh (and/or other components like ingresses and external load balancers) by providing it with dynamic configuration and continuously observing the effects and results of that configuration.
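To make the division of labour concrete: the orchestration engine computes the traffic split, and the mesh merely applies it. A minimal sketch, rendering an Istio-style weighted split as a plain dict (the field names are simplified for illustration and are not an exact mesh schema):

```python
# Illustrative sketch: a Release Orchestration engine renders dynamic
# traffic-split configuration for a service mesh. Field names are
# simplified stand-ins, not a real mesh resource schema.

def render_traffic_split(service, stable_version, canary_version, canary_weight):
    """Produce a weighted two-way split; the engine re-renders this
    every time it decides to advance a release step or roll back."""
    return {
        "service": service,
        "routes": [
            {"destination": f"{service}-{stable_version}",
             "weight": 100 - canary_weight},
            {"destination": f"{service}-{canary_version}",
             "weight": canary_weight},
        ],
    }
```

In a real setup the engine would push each rendered config to the mesh (or ingress, or load balancer), then watch metrics before rendering the next one.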
Feature flags and toggles are awesome, and perfect when you are still developing certain features that you don’t want to expose to your audience yet but still want to build and deploy, or when you want to offer semi-permanent feature packs to customer segments. Canary releasing, an important pattern and feature in Release Orchestration systems like Vamp, is all about creating a safe, fast and automated release process for going live with your new software functionality or updates. We believe Layer-4 and Layer-7 mechanisms are a better solution for traffic splits, conditional routing and dynamic releasing: you don’t need to instrument your code, there is no overhead from adding additional logic to your code, and the L4/L7 mechanisms are already there anyway.
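The contrast with feature flags can be illustrated with the kind of condition a proxy evaluates: segment routing happens entirely at the traffic layer, with no logic in the application itself. A hypothetical sketch (the request fields and version labels are assumptions for the example, not real proxy configuration):

```python
# Sketch of Layer-7 conditional routing as a proxy might evaluate it.
# Request fields ("geo", "device") and version labels are illustrative.

def route_request(request, conditions, canary="v2", stable="v1"):
    """Send requests matching every segment condition to the canary
    version; all other traffic stays on the stable version."""
    if all(request.get(field) == expected
           for field, expected in conditions.items()):
        return canary
    return stable
```

Because the match runs in the proxy layer, changing the segment (say, from Dutch mobile users to all mobile users) is a configuration change, not a code deploy.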
Vamp doesn’t add any components inside your application’s end-to-end flow; it simply and dynamically configures the L4/L7 traffic-shaping mechanisms that you already have in your infrastructure anyway. More complex rules can add a negligible overhead, but solutions like HAProxy, Nginx and Envoy are super-efficient.
Very. We use best practices like mTLS, RBAC, Auth0 and HashiCorp Vault time-based tokens. Only rolled-up and sanitised high-level metrics are passed from the client infrastructure to our central management environment. All data is isolated and encrypted. Vamp is very enterprise-savvy and has this on board by design.
Yes, we can provide a managed private Vamp Cloud environment for you to deploy and run on-premises on your own infrastructure. This is a non-standard implementation, so ask sales how it compares commercially with the standard Vamp Cloud implementation.
No, Vamp can also work with plain ingress solutions like Nginx and Contour (Envoy-based). Vamp’s release agent will check whether the selected ingress solution is already installed and running, and if not, will automatically install and configure it for you.
Yes, Vamp also supports “headless” services. Support for Knative and AWS Fargate is planned for Q1 2021.
See our pricing page at www.vamp.io/pricing. Reach out to us for special discounts for open-source, community and non-profit projects.
Yes, you can. Vamp offers upgrades of service levels up to enterprise level. Go to www.vamp.io/pricing for the details.