Control access into the service mesh with Consul API gateway
Consul API Gateway is a dedicated ingress solution for intelligently routing traffic to applications on your Consul service mesh. This provides a consistent method for handling inbound requests to the service mesh from external clients.
Consul API Gateway takes all API calls from clients, then routes them to the appropriate service with request routing, composition, and protocol translation. Once Consul API Gateway becomes available for your services, you can use it for ingress, load balancing, modifying HTTP headers, and splitting traffic between multiple services based on weighted ratios.
In this tutorial, you will:
- Deploy a HashiCorp Cloud Platform (HCP) Consul cluster and an Elastic Kubernetes Service (EKS) cluster with Terraform
- Deploy Consul dataplane to the EKS cluster
- Deploy example applications (HashiCups and echo)
- Deploy Consul API Gateway
- Apply API gateway routes to enable ingress to HashiCups
- Apply API gateway routes to load balance echo services
Prerequisites
The tutorial assumes that you are familiar with Consul and its core functionality. If you are new to Consul, refer to the Consul Getting Started tutorials collection.
For this tutorial, you will need:
- An HCP account configured for use with Terraform
- An AWS account configured for use with Terraform
- kubectl >= 1.28
- aws-cli >= 2.13.19
- terraform >= 1.5.7
- consul-k8s v1.2.1
- helm >= 3.12.3
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
```shell-session
$ git clone https://github.com/hashicorp-education/learn-consul-api-gateway
```
Change into the directory with the newly cloned repository.
```shell-session
$ cd learn-consul-api-gateway/hcp/
```
This repository contains Terraform configuration to spin up the initial infrastructure and all files to deploy Consul, the sample application, and the API Gateway resources.
Here, you will find the following Terraform configuration:
- `eks-cluster.tf` defines the Amazon EKS cluster deployment resources
- `hcp-consul.tf` defines the HCP Consul Dedicated cluster and network peering resources

  Note: By default, this tutorial creates a "development" size tier HCP Consul Dedicated cluster. Development tiers are single-server Consul clusters recommended for testing or evaluation purposes only. For production, we recommend the "standard" or "plus" tier because each Consul datacenter will then have the recommended three server nodes.

- `hcp-hvn.tf` defines the HashiCorp Virtual Network (HVN) resources
- `outputs.tf` defines outputs you will use to authenticate and connect to your Kubernetes cluster
- `providers.tf` defines the AWS and HCP provider definitions for Terraform
- `variables.tf` defines variables you can use to customize the tutorial
- `vpc.tf` defines the AWS VPC resources
Additionally, you will find the following directories:
- `api-gw` contains the Kubernetes custom resource definitions (CRDs) required to deploy and configure the API gateway resources
- `consul` contains the Helm values file that configures your Consul installation
- `k8s-services` contains the Kubernetes definitions that deploy the HashiCups and echo sample applications
Deploy infrastructure, Consul, and sample applications
Initialize your Terraform configuration to download the necessary providers and modules.
```shell-session
$ terraform init

Initializing the backend...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
```
Then, create the infrastructure. Confirm the run by entering `yes`. This will take about 15 minutes to deploy your infrastructure. Feel free to explore the next sections of this tutorial while you wait for the resources to deploy.

```shell-session
$ terraform apply
## ...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
## ...
Apply complete! Resources: 74 added, 0 changed, 0 destroyed.
```
Configure your terminal to communicate with EKS
Now that you have deployed the Kubernetes cluster, configure `kubectl` to interact with it.

```shell-session
$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw kubernetes_cluster_id)
```
Install Consul
You will now deploy Consul on your Kubernetes cluster with `consul-k8s`. By default, Consul deploys into its own dedicated namespace (`consul`). The installation uses the Consul Helm values file in the `consul` directory.

Notice that this file defines the API gateway in the `connectInject` stanza. Since this tutorial uses an EKS cluster, the API gateway will create an AWS load balancer to handle network ingress. Refer to the Consul Helm chart documentation for more information.
`consul/values.yaml`

```yaml
global:
  enabled: true
  name: consul
  datacenter: dc1
connectInject:
  transparentProxy:
    defaultEnabled: true
  enabled: true
  default: true
  apiGateway:
    managedGatewayClass:
      serviceType: LoadBalancer
```
Deploy Consul and confirm the installation by entering `y`.

```shell-session
$ consul-k8s install -config-file=consul/values.yaml

==> Checking if Consul can be installed
 ✓ No existing Consul installations found.
 ✓ No existing Consul persistent volume claims found
 ✓ No existing Consul secrets found.

==> Consul Installation Summary
    Name: consul
    Namespace: consul

    Helm value overrides
    --------------------
    connectInject:
      apiGateway:
        managedGatewayClass:
          serviceType: LoadBalancer
##...
 ✓ Consul installed in namespace "consul".
```
Verify that you have installed Consul by inspecting the Kubernetes pods in the `consul` namespace.

```shell-session
$ kubectl --namespace=consul get pods
NAME                                           READY   STATUS    RESTARTS   AGE
consul-connect-injector-8bf47f66-6wjfn         1/1     Running   0          3m32s
consul-webhook-cert-manager-5d67468847-kh6qd   1/1     Running   0          3m32s
```
Deploy sample applications
Now that your Consul service mesh is operational in your cluster, deploy the two sample applications so you can explore API Gateway for ingress and load balancing.
Deploy the HashiCups and echo services.
```shell-session
$ kubectl apply --filename k8s-services/
```
Check the pods to make sure they are all up and running.
```shell-session
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
echo-1-6469764ff6-drqnh           2/2     Running   0          39s
echo-2-d7b7b9599-4w9mr            2/2     Running   0          38s
frontend-7d9774d4c5-ctxmb         2/2     Running   0          38s
frontend-v2-c67dc467c-kt7vp       2/2     Running   0          38s
payments-b4f5c6c58-t8cr7          2/2     Running   0          36s
product-api-74c5f98f64-9nltt      2/2     Running   0          35s
product-api-db-6c49b5dcb4-rrzss   2/2     Running   0          36s
public-api-5dc47dd74-5d6kx        3/3     Running   0          35s
```
Deploy API Gateway
A complete API gateway deployment consists of an API gateway configuration and a routing configuration. In this section, you will review the API gateway configuration file, then deploy it.
API Gateway consists of multiple components that enable external traffic into your Consul service mesh. The configuration file specifies how Consul API Gateway will handle API calls from clients and how it will route them to the respective services with request routing, composition, and protocol translation.
Inspect the contents of the `./api-gw/consul-api-gateway.yaml` file in your current directory.

`./api-gw/consul-api-gateway.yaml`

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: api-gateway
  namespace: consul
spec:
  gatewayClassName: consul
  listeners:
    # options: HTTP or HTTPS
    - protocol: HTTP
      # options: 80 or 443 or custom
      port: 80
      name: http
      allowedRoutes:
        namespaces:
          # options: All or Same or Specific
          from: All
##...
```
This configuration file defines a `Gateway` object. This object is the main infrastructure resource that links all other related configuration together. The spec defines listener and address details. Refer to the `Gateway` documentation for more information.

This configuration file also defines other objects needed to deploy the Consul API Gateway, such as `ReferenceGrant`, `ClusterRoleBinding`, and `ClusterRole`. The reference grant lets the API gateway route traffic to services in different namespaces, and the RBAC `ClusterRole` objects let the API gateway interact with Consul datacenter resources.
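For context, a `ReferenceGrant` generally follows the shape below. This is an illustrative sketch: the names and namespaces here are placeholders, not the exact values from `consul-api-gateway.yaml`.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: example-grant        # illustrative name
  namespace: default         # the namespace that owns the referenced objects
spec:
  # Which kinds of objects, from which namespace, may reference objects here
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: consul      # illustrative source namespace
  # What those objects are allowed to reference in this namespace
  to:
    - group: ""
      kind: Service
```

Without a matching grant, the Gateway API rejects any cross-namespace reference by default.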
```shell-session
$ kubectl apply --filename api-gw/consul-api-gateway.yaml
gateway.gateway.networking.k8s.io/api-gateway created
referencegrant.gateway.networking.k8s.io/consul-reference-grant created
clusterrolebinding.rbac.authorization.k8s.io/consul-auth-binding created
clusterrolebinding.rbac.authorization.k8s.io/consul-api-gateway-tokenreview-binding created
clusterrole.rbac.authorization.k8s.io/consul-api-gateway-auth created
clusterrolebinding.rbac.authorization.k8s.io/consul-api-gateway-auth-binding created
```
Verify that you have deployed the API gateway. You should see output similar to the following.

```shell-session
$ kubectl get services --namespace=consul api-gateway
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP                                                                PORT(S)        AGE
api-gateway   LoadBalancer   172.20.167.0   a942e82c578ea4713bcd552f3c193913-2027307265.us-west-2.elb.amazonaws.com   80:30177/TCP   43s
```
Export the API gateway's external hostname. You will reference this URL in the next sections to confirm you have configured the routes for ingress and load balancing.

```shell-session
$ export APIGW_URL=$(kubectl get services --namespace=consul api-gateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') && echo $APIGW_URL
a336ea2854e1c4f3294470eed4975c42-388180783.us-west-2.elb.amazonaws.com
```
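The `jsonpath` expression above walks the Service object's `status.loadBalancer.ingress[0].hostname` field. As a rough sketch of the JSON shape it reads (the sample hostname below is made up), the same field can be extracted from raw JSON with `sed`:

```shell
# Hypothetical sample of the Service status JSON that `kubectl get -o json`
# would return for a LoadBalancer service (the hostname is made up).
status_json='{"status":{"loadBalancer":{"ingress":[{"hostname":"a1b2c3.us-west-2.elb.amazonaws.com"}]}}}'

# Pull out the hostname field, mirroring what the jsonpath expression selects.
apigw_host=$(printf '%s' "$status_json" | sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p')
echo "$apigw_host"   # prints a1b2c3.us-west-2.elb.amazonaws.com
```

In practice, prefer the `jsonpath` form shown above; this sketch only illustrates which field the command reads.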
Apply API gateway routes for ingress
Routes tell your Consul API Gateway how to handle traffic into your service mesh.
The following diagram shows your existing Kubernetes cluster, the Consul API Gateway, and the HashiCups web frontend application. In this section, you will review the `HTTPRoute` definitions for HashiCups and deploy them to enable ingress traffic to the frontend application.
The `./api-gw/ingress-hashicups-frontend.yaml` file defines an intention and two routes.

The `api-gateway-hashicups` intention lets traffic flow from the API gateway to the HashiCups `frontend` service.

`./api-gw/ingress-hashicups-frontend.yaml`

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api-gateway-hashicups
spec:
  destination:
    name: frontend
  sources:
    - name: api-gateway
      action: allow
```
The `route-root` HTTPRoute directs traffic from `/` to the `frontend` service, where the HashiCups demo application web frontend runs. The `parentRefs` field binds the route to the API gateway, and the rules and matches define the conditions used to match HTTP requests to the service. Refer to the `Route` documentation for more information.

`./api-gw/ingress-hashicups-frontend.yaml`

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: route-root
  namespace: default
spec:
  parentRefs:
    - name: api-gateway
      namespace: consul
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - kind: Service
          name: frontend
          port: 3000
```
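The `PathPrefix` match type matches on path-element boundaries rather than raw string prefixes, and a prefix of `/` matches every path. The helper below is my own illustrative sketch of that rule, not part of the tutorial files:

```shell
# Illustrative PathPrefix matching (per the Gateway API spec): a request path
# matches when it equals the prefix, or continues it on a "/" boundary.
matches_prefix() {
  path="$1"; prefix="$2"
  [ "$prefix" = "/" ] && return 0          # "/" matches every path
  [ "$path" = "$prefix" ] && return 0      # exact element match
  case "$path" in
    "$prefix"/*) return 0 ;;               # continues on a "/" boundary
  esac
  return 1
}

matches_prefix "/coffees" "/" && echo "/coffees is matched by prefix /"
matches_prefix "/echoes" "/echo" || echo "/echoes is NOT matched by prefix /echo"
```

Note the boundary rule: `/echoes` is not matched by the prefix `/echo`, even though it starts with the same characters.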
The `route-hashicups` HTTPRoute performs a URL rewrite and redirects traffic from `/hashicups` to the root address of the API gateway, where `route-root` serves the HashiCups web frontend.

`./api-gw/ingress-hashicups-frontend.yaml`

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: route-hashicups
  namespace: default
spec:
  parentRefs:
    - name: api-gateway
      namespace: consul
  rules:
    - matches:
        - path:
            type: Exact
            value: /hashicups
      backendRefs:
        - kind: Service
          name: frontend
          namespace: default
          port: 3000
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              replacePrefixMatch: /
              type: ReplacePrefixMatch
```
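The `ReplacePrefixMatch` filter swaps the matched prefix for the replacement before the request reaches the backend, so `/hashicups` is forwarded as `/`. A hypothetical sketch of that transformation (my own helper, not part of the gateway):

```shell
# Illustrative prefix rewrite: drop the matched prefix, then join the
# remainder onto the replacement without producing a double slash.
rewrite_prefix() {
  path="$1"; prefix="$2"; replacement="$3"
  rest="${path#"$prefix"}"     # strip the matched prefix
  rest="${rest#/}"             # drop a leading "/" so the join stays clean
  printf '%s\n' "${replacement%/}/${rest}"
}

rewrite_prefix "/hashicups" "/hashicups" "/"   # -> /
```

With this rule, a request for `/hashicups` reaches the `frontend` service as `/`, which is the path `route-root` also serves.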
Apply the intention and API Gateway routes.
```shell-session
$ kubectl apply --filename api-gw/ingress-hashicups-frontend.yaml
serviceintentions.consul.hashicorp.com/api-gateway-hashicups created
httproute.gateway.networking.k8s.io/route-root created
httproute.gateway.networking.k8s.io/route-hashicups created
```
Retrieve the API gateway URL to confirm you have enabled ingress to HashiCups on both the root and `/hashicups` paths. Open the URLs in your browser to view the HashiCups UI. The frontend service loads, but no items appear because you have not yet exposed the HashiCups API.

```shell-session
$ echo "http://$APIGW_URL" && echo "http://$APIGW_URL/hashicups"
http://a336ea2854e1c4f3294470eed4975c42-388180783.us-west-2.elb.amazonaws.com
http://a336ea2854e1c4f3294470eed4975c42-388180783.us-west-2.elb.amazonaws.com/hashicups
```
Apply API gateway routes for HashiCups API ingress
Now, you will expose the HashiCups API so that the frontend can query the available coffees from the API.
The following diagram shows your existing Kubernetes cluster, the Consul API Gateway, the HashiCups frontend service, and the HashiCups public API service. In this section, you will review the `HTTPRoute` definition for the HashiCups public API and deploy it to enable ingress traffic to the HashiCups API.
The `./api-gw/ingress-hashicups-api.yaml` file defines an intention and one route.

The `api-gateway-publicapi` intention lets traffic flow from the API gateway to the HashiCups `public-api` service.

`./api-gw/ingress-hashicups-api.yaml`

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api-gateway-publicapi
spec:
  destination:
    name: public-api
  sources:
    - name: api-gateway
      action: allow
```
The `route-api` HTTPRoute directs traffic from `/api` to the `public-api` service, which serves information about the coffees in the HashiCups inventory. The `parentRefs` field binds the route to the API gateway, and the rules and matches define the conditions used to match HTTP requests to the service. Refer to the `Route` documentation for more information.

`./api-gw/ingress-hashicups-api.yaml`

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: route-api
  namespace: default
spec:
  parentRefs:
    - name: api-gateway
      namespace: consul
  rules:
    - matches:
        - path:
            type: Exact
            value: /api
      backendRefs:
        - kind: Service
          name: public-api
          namespace: default
          port: 8080
```
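Note that `route-api` uses an `Exact` match, which accepts only the literal path, unlike the `PathPrefix` match used by `route-root`. A small illustrative sketch of the difference (my own helper, not part of the tutorial files):

```shell
# Illustrative Exact path matching: only the literal path matches.
matches_exact() { [ "$1" = "$2" ]; }

matches_exact "/api" "/api" && echo "/api is matched"
matches_exact "/api/coffees" "/api" || echo "/api/coffees is NOT matched by Exact /api"
```

If you wanted subpaths under `/api` to reach the backend as well, you would use a `PathPrefix` match instead.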
Apply the intention and API Gateway route.
```shell-session
$ kubectl apply --filename api-gw/ingress-hashicups-api.yaml
serviceintentions.consul.hashicorp.com/api-gateway-publicapi created
httproute.gateway.networking.k8s.io/route-api created
```
Retrieve the API gateway URL to confirm you have enabled access to the `public-api` service, so the HashiCups items are now visible in the store. Open the URL in your browser to view the HashiCups UI.

```shell-session
$ echo "http://$APIGW_URL"
http://a336ea2854e1c4f3294470eed4975c42-388180783.us-west-2.elb.amazonaws.com
```
Apply API gateway routes for load balancing
You can also use API Gateway to load balance services within your Consul service mesh.
The following diagram shows your existing Kubernetes cluster, the Consul API Gateway, and the echo sample services. In this section, you will review the `HTTPRoute` definition for the echo services and deploy it to split traffic evenly between `echo-1` and `echo-2`.
The `./api-gw/ingress-echo-loadbalance.yaml` file defines two intentions and a route.

The `api-gateway-echo-1` intention lets traffic flow from the API gateway to the first echo service (`echo-1`).

`./api-gw/ingress-echo-loadbalance.yaml`

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api-gateway-echo-1
spec:
  destination:
    name: echo-1
  sources:
    - name: api-gateway
      action: allow
```
The `api-gateway-echo-2` intention lets traffic flow from the API gateway to the second echo service (`echo-2`). You need to define both intentions because the API gateway must be able to send traffic to both echo services.

`./api-gw/ingress-echo-loadbalance.yaml`

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api-gateway-echo-2
spec:
  destination:
    name: echo-2
  sources:
    - name: api-gateway
      action: allow
```
The `route-echo` HTTPRoute splits traffic between the two echo services. The route assigns a weight of `50` to each service, which distributes traffic evenly between them.

`./api-gw/ingress-echo-loadbalance.yaml`

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: route-echo
  namespace: default
spec:
  parentRefs:
    - name: api-gateway
      namespace: consul
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      backendRefs:
        - kind: Service
          name: echo-1
          port: 8080
          weight: 50
        - kind: Service
          name: echo-2
          port: 8090
          weight: 50
```
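Each backend receives its weight divided by the sum of all weights on the rule. A quick sketch of that arithmetic for the `50`/`50` configuration above:

```shell
# Weighted split math: share = weight / (sum of weights).
w1=50; w2=50
total=$((w1 + w2))
echo "echo-1 share: $((100 * w1 / total))%"   # prints echo-1 share: 50%
echo "echo-2 share: $((100 * w2 / total))%"   # prints echo-2 share: 50%
```

Weights of `75` and `25` would instead send roughly three out of every four requests to the first backend.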
Apply the intentions and API Gateway route.
```shell-session
$ kubectl apply --filename api-gw/ingress-echo-loadbalance.yaml
serviceintentions.consul.hashicorp.com/api-gateway-echo-1 created
serviceintentions.consul.hashicorp.com/api-gateway-echo-2 created
httproute.gateway.networking.k8s.io/route-echo created
```
Visit the API gateway's `/echo` path several times. Notice how the API gateway alternates requests between the two services.

```shell-session
$ for i in `seq 1 10`; do echo -n "$i. " && curl -s $APIGW_URL | sed -n 's/.*\(HashiCups-v1\).*/\1/p;s/.*\(HashiCups-v2\).*/\1/p' && echo ""; done
1. HashiCups-v1
2. HashiCups-v2
3. HashiCups-v1
4. HashiCups-v2
5. HashiCups-v1
6. HashiCups-v2
7. HashiCups-v1
8. HashiCups-v2
9. HashiCups-v1
10. HashiCups-v2
```
Clean up environment
Destroy the Terraform resources to clean up your environment. Enter `yes` to confirm the destroy operation.

```shell-session
$ terraform destroy
```
Due to race conditions with the various cloud resources created in this tutorial, you may need to run the `destroy` operation twice to ensure all resources have been properly removed.
Next steps
In this tutorial, you used API Gateway as an ingress solution for routing traffic to the applications running on your HashiCorp Consul service mesh. In the process, you learned the benefits of using API Gateway for secure traffic ingress to multiple services and for load balancing. Using API Gateway as your dedicated ingress solution eliminates the need to install and manage additional applications for handling traffic ingress.
Feel free to explore these tutorials and collections to learn more about Consul service mesh, microservices, and Kubernetes security.