Kong Enterprise - How to Set Up the Rate Limiting Advanced Plugin
Kong Enterprise gives you the ability to rate limit traffic to various objects using the Rate Limiting Advanced plugin. In the example below, we will rate limit a service fronted by Kong Enterprise.
We will use our existing Kong Enterprise on RHEL 7 environment. The installation process for this environment is documented here.
First, let's make sure we have an existing service we can use. If your environment needs to have a service created, you can also check out our blog on how to do so here.
We will also be using the RBAC controls and the user we set up in our blog post. If you have not yet set up RBAC, you can learn how to do so here.
1) Create a service that we can use for this example
Log in to the Kong portal at https://<kong_FQDN_or_IP>:8445 and navigate to your chosen Workspace -> Services -> New Service
Fill in the fields for Service Name, Host, Path, Port and other fields as necessary
You can also create the Service via the command line, in the format below:
curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services --data 'name=DemoService' --data 'url=http://myurl.com' --header "Kong-Admin-Token: rbac_user_token_1"
Check to make sure the Service was created successfully by navigating through the console, or by running the following command:
curl -i -X GET --url "http://<kong_FQDN_or_IP>:8001/services" --header "Kong-Admin-Token: rbac_user_token_1"
2) Next we will add a route for this service
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/routes" --data "hosts[]=mydemoexample.com" --header "Kong-Admin-Token: rbac_user_token_1"
3) Enable the Rate Limiting Advanced plugin on our defined service
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/plugins" --data "name=rate-limiting-advanced" --data "config.sync_rate=0" --data "config.window_size=60" --data "config.limit=2" --header "Kong-Admin-Token: rbac_user_token_1"
This configuration means that the DemoService service will not be allowed to process more than 2 requests per 60-second window.
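If you want to confirm the plugin was applied with the values you expect, one quick check (assuming the same Admin API port and RBAC token as above) is to list the plugins attached to the service and inspect the config block in the JSON response:
curl -i -X GET --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/plugins" --header "Kong-Admin-Token: rbac_user_token_1"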
4) Now we will test running more than 2 requests against the DemoService service.
After running the request below more than twice
curl -i -X GET --url "http://<kong_FQDN_or_IP>:8000/" --header "Host: mydemoexample.com" --header "Kong-Admin-Token: rbac_user_token_1"
We get the following message:
HTTP/1.1 429 Too Many Requests
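To make the test repeatable, you can wrap the same request in a small shell loop and print only the status codes. This is an illustrative sketch using the host header and token from above; within a single 60-second window the first two requests should return the upstream's status code and the third should return 429:
for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" --url "http://<kong_FQDN_or_IP>:8000/" --header "Host: mydemoexample.com" --header "Kong-Admin-Token: rbac_user_token_1"; done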
By controlling the volume of requests to a specific service, and by adding RBAC controls in front of it, you effectively create a quasi-firewall for east-west traffic that helps protect against internal networking vulnerabilities.
If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to [email protected]
Setting up Role-Based Access Control (RBAC) with Kong Enterprise
If you've used the community version of the Kong API gateway, you have probably noticed that anyone who knows the server name or IP of your Kong community API gateway can access and modify existing objects, including services and routes. Kong Enterprise provides additional capabilities for setting up and using role-based access control (RBAC).
In this example, we will leverage the Kong Enterprise on RHEL 7 lab instance we set up earlier. You can read the install steps here.
Before getting started, please make sure enforce_rbac=on is set in the kong.conf file.
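If you need to turn this on, one way (a sketch assuming the default /etc/kong/kong.conf location) is to set the property and restart Kong; equivalently, you can export KONG_ENFORCE_RBAC=on in the environment before starting Kong:
enforce_rbac = on
kong restart -c /etc/kong/kong.conf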
Log in to https://<Kong-Enterprise-VM-IP>:8445/login using kong_admin as the username and the password you set during the install process (this is the same password you assigned in the EXPORT_PASSWORD='password' step)
Click on Teams -> RBAC Users
Create a new user rbac_user_1 with a token of rbac_user_token_1
Make sure that enabled checkbox is checked
Add roles -> admin
Note that we are creating this user with 'admin' permissions, not 'super-admin', so it will have access to all endpoints across all workspaces except the RBAC Admin API.
A new RBAC user, rbac_user_1, gets created
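If you prefer the Admin API over the GUI, Kong Enterprise exposes RBAC endpoints for the same steps. A rough equivalent sketch is below; it assumes the Admin API is reachable on port 8001 and that you authenticate with an existing super-admin token, shown here as the placeholder <super_admin_token>:
curl -i -X POST --url "http://<Kong-Enterprise-VM-IP>:8001/rbac/users" --data "name=rbac_user_1" --data "user_token=rbac_user_token_1" --header "Kong-Admin-Token: <super_admin_token>"
curl -i -X POST --url "http://<Kong-Enterprise-VM-IP>:8001/rbac/users/rbac_user_1/roles" --data "roles=admin" --header "Kong-Admin-Token: <super_admin_token>"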
Now let's try and test the RBAC setup. We will use Postman (https://www.getpostman.com/) for this example.
First we will create a new Collection labeled 'Kong Enterprise' and then a new Request within that Collection called 'Get Services'.
Next, we will try to run a GET request against https://<Kong-Enterprise-VM-IP>:8445/services to list out all available services. If you don't pass any headers or credentials, you get the error notification "Invalid credentials. Token or User credentials required".
By adding the Kong-Admin-Token header with the value of the token we set in the earlier step (rbac_user_token_1), we run the request again and this time it succeeds.
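The same check can be done from the command line instead of Postman. For example, mirroring the URL used above (with -k to skip certificate verification on a self-signed lab instance):
curl -i -k -X GET --url "https://<Kong-Enterprise-VM-IP>:8445/services" --header "Kong-Admin-Token: rbac_user_token_1"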
As you can see, with RBAC enabled, Kong Enterprise provides much greater control over who can access and modify various objects. The user permissions can be tailored to suit various team needs – depending upon how granular you want access to be.
If you have any questions or comments on the tutorial content above, or run into specific errors not covered here, please feel free to reach out to [email protected]
Agile Methodologies - Rise & Shine
We all know that Agile has been around for a while now. You have probably heard about it a thousand times over the past few years. But even today there are organizations (e.g., utilities that prefer CapEx) that rely heavily on the waterfall model for software development, whereby all the project and implementation details are agreed to by the stakeholders upfront. In the past, this practice has proved to be risky, inflexible, and too time-consuming and costly at the beginning of the project. When starting any new project, even when applying all possible game theory outcomes to determine the risk and estimated effort, at some point you don't know what you don't know. Agile helps alleviate some of these issues. Using various methodologies, the Agile process allows the requirements to evolve and change, and a team composed of developers and experts from various organizational areas works together to address the tasks as they evolve. This type of setup is typically led by a Scrum Master, who leads regular checkpoint meetings with the stakeholders, helps break down the work into smaller chunks to be picked up by the developers, and sets up timelines for completion and accountability.
There are several methodologies and frameworks that you can follow to be more Agile - for example, you can use Scrum, Test-Driven Development, DevOps, Continuous Integration, Continuous Delivery, Kanban, Extreme Programming, and more. The idea is to provide flexibility, avoid lock-in to a set process or tools, and have regular checkpoints with the stakeholders so that any shifts in direction can be accommodated earlier in the cycle.
The Manifesto for Agile Software Development outlines four values: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan. It also defines twelve supporting principles.
The Product Owner is one of the most important stakeholders in the Agile process. The Product Owner is responsible for setting the overall strategy and direction for the deliverable being worked on. Because the Scrum tasks are at a very low level, it is quite easy to miss the big picture of what is being delivered and why. The Product Owner's role is to help make sense of all the small unrelated tasks, to deliver a product that provides business value to the organization.
Keyva has offerings available to help you with your Agile journey. You can find more information here. Please contact us if you'd like to have us review your environment and provide suggestions on what might work for you. Simply drop us a line here: [email protected]
Secure your business critical apps with Kong
As organizations transform and containerize their business critical applications with the objective of making them ready for deployment into various cloud platforms, they also need to address application security. There are several considerations to achieve varying levels of security for your cloud-native app. Let's take a look at how Kong can help make this process easier:
Access Restrictions, RBAC and Traffic Restrictions
The Kong API gateway can tie into your existing AD/LDAP setup and map to existing groups and users to provide role-based access control (RBAC) in front of your cloud-native app. By setting up access rules in Kong, you can restrict the traffic consuming your application to specific IPs or DNS addresses. You can also easily manage and update these rules so that they can be tweaked based on the type of application and the desired functionality. For example, you can create a security profile for web-based applications that automatically allows incoming traffic on ports 80 and 8080, but blocks all other ports.
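As a concrete illustration of such a traffic restriction, here is a sketch of enabling Kong's ip-restriction plugin on a service so that only a given internal CIDR range can reach it. The service name and CIDR below are placeholders, and the config key name (allow vs. whitelist) differs between Kong versions, so check the schema for your release:
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/plugins" --data "name=ip-restriction" --data "config.allow=10.0.0.0/8"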
Throttling
If your back-end infrastructure is not yet able to handle large spikes of incoming application requests, or if you would like to maintain a set rate of incoming requests into a queue to accommodate processing times, you can throttle API calls to a fixed rate using the Kong API gateway. Such throttling can help deter DDoS-based attacks, especially if your critical applications are exposed to the outside network.
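For example, with the community rate-limiting plugin you could cap a service at a fixed number of requests per minute. An illustrative call (the service name and limit are placeholders) might look like:
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/plugins" --data "name=rate-limiting" --data "config.minute=100"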
Canary Testing and 'Promote to Production'
Kong gives you the ability to gradually and smoothly transition workloads from your lower environments to higher environments, and thereby automate the 'promote to production' process. By defining destination 'weights' for the incoming traffic, you can initially assign a higher weight to the lower environment, and as the results are hardened, reduce the weight for your test endpoint and increase it for your production endpoint. This allows you to reduce deployment risk and release new patches and updates with zero downtime.
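In Kong this weighting is typically modeled with an upstream and weighted targets. A rough sketch that sends roughly 10% of traffic to a test backend and 90% to production is below; the upstream name, host names, and weights are placeholders rather than values from the original post:
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/upstreams" --data "name=demo-upstream"
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/upstreams/demo-upstream/targets" --data "target=test.internal:8080" --data "weight=100"
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/upstreams/demo-upstream/targets" --data "target=prod.internal:8080" --data "weight=900"
# point the service's host at the upstream name so Kong load balances across the weighted targets
curl -i -X PATCH --url "http://<kong_FQDN_or_IP>:8001/services/DemoService" --data "host=demo-upstream"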
Additionally, Kong offers a number of free plugins that can be used with its open-source community edition. By adding the Bot Detection plugin to a base Kong API gateway, you can protect your application service from the most common attack bots and blacklist or whitelist specific traffic sources.
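Enabling it is a single Admin API call; an illustrative sketch (the service name is a placeholder, and the allow/deny list options vary across plugin versions, so consult your version's schema) is:
curl -i -X POST --url "http://<kong_FQDN_or_IP>:8001/services/DemoService/plugins" --data "name=bot-detection"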
Keyva helps Fortune 500 organizations evaluate their existing business application portfolios and transform their applications to cloud-native architectures. Keyva can help you deliver new agile technical capabilities and drive adoption. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]
Anuj joined Keyva from Tech Data where he was the Director of Automation Solutions. In this role, he specializes in developing and delivering vendor-agnostic solutions that avoid the “rip-and-replace” of existing IT investments. Tuli has worked on Cloud Automation, DevOps, Cloud Readiness Assessments and Migrations projects for healthcare, banking, ISP, telecommunications, government and other sectors.
During his previous years at Avnet, Seamless Technologies, and other organizations, he held multiple roles in the Cloud and Automation areas. Most recently, he led the development and management of Cloud Automation IP (intellectual property) and related professional services. He holds certifications for AWS, VMware, HPE, BMC and ITIL, and offers a hands-on perspective on these technologies.
Like what you read? Follow Anuj on LinkedIn at: https://www.linkedin.com/in/anujtuli/
Patch Management Automation
If you have an IT environment with hundreds or thousands of servers running multiple OSes, you know the operational challenges of patching and maintenance all too well. Many organizations have operational team members dedicated to patching systems on a weekly basis. If this process is being carried out manually, it requires a lot of time from an engineer to go through the systems individually. The engineer needs to track the current patch level and deploy the latest patches that make such systems compliant. Then that engineer needs to run integration tests to make sure nothing was adversely affected. All OS types have different methodologies for patching, and this work is mostly targeted for off-hours when the systems are more likely to be idle.
There are many different levels of automation that can be applied to OS patching. At the lowest level of automation, you can use configuration management systems to make sure that all devices under management are at a consistent level of software patching. This process can be triggered manually via the configuration management console or can be scheduled from within the tool. You can turn this into a more advanced automated use case by integrating with your ticketing system. An example: a Change ticket is scheduled for patching the targeted systems, it goes through the necessary team approvals, and automated configuration management is triggered once the approved change window is reached. Notifications will then be sent out to the operations team upon success or failure of the automation. In one of our customer studies the operations team saved over 800 hours (20 FTE weeks) per year by automating patching for 1500 of their Windows and Unix servers. The more devices or OSes in your environment, the more you have to gain from thorough automation.
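At that lowest level, the trigger can be as simple as an ad-hoc configuration management run. For instance, an illustrative Ansible command to bring a group of RHEL hosts up to the latest package versions might look like the line below; the inventory group name is a placeholder, and Windows hosts would use a different module:
ansible rhel_servers -b -m ansible.builtin.yum -a "name=* state=latest"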
Keyva helps Fortune 100 organizations assess their existing patch management processes and compare their "As Is" state to their recommended "To Be" state. By helping implement the recommendations around people, process, and tool changes, we've saved hundreds of hours of manual work for these organizations. This in turn has helped make their Operations teams more efficient and able to provide more direct value-add to their core business rather than spend time on repetitive tasks. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]
Composable Automation with Kong
We talked about how Kong can help you get set up with API abstraction here, and we also talked about Composable Infrastructures here. Today we will discuss how you can combine both these ideas to enable the delivery of Automation-as-a-Service by setting up Composable Automation with Kong.
Most organizations that are developing a lot of IT automation usually do so using multiple best-of-breed automation tools. We will consider Red Hat Ansible and Chef for the example today. Now, both these tools require a very different set of skills to administer and operate. Ansible is YAML and Python-based, while Chef requires skills with Ruby scripting. But given that an organization has already invested a lot of time and effort in developing automation for these different products, how do we maximize the value out of those existing modules? Would we need another overarching tool that can orchestrate them? Or do we simply replace one with the other? But given they both require different skills, how do we retrain the various teams that use the tool being replaced? All these questions are valid, and require a deeper assessment of the targeted end goal.
One of the options to resolve this quandary is to create Composable Automation using the tools you already have, and add a layer of Kong API abstraction to it. In other words, the Red Hat Ansible and Chef teams would each create a deployable package that has all the platform prerequisites for a team to start creating playbooks, and make it available via a code repository. This way, the end users of these automation platforms can keep working with the tool of their choice and keep developing automation with their existing skill sets, without having to worry about administering those platforms or learning a new programming language. Adding a layer of Kong abstraction can help genericize the automation being called. For example, say there is automation built into both products for provisioning a VM; given that each product has a specific REST API call to trigger this function, the API abstraction layer can redirect the "provision a VM" call to the appropriate automation tool on the backend depending upon the requested parameters. This also helps from the business value perspective, as you can gradually replace or retire the automation platform you don't need without affecting the calling service.
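As an illustrative sketch of what that abstraction layer could look like, a generic "provision a VM" endpoint could be mapped to an Ansible Tower job template launch behind Kong. The Tower hostname, job template ID, and route path below are placeholders, not values from the original post:
curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services --data 'name=provision-vm' --data 'url=https://<tower_host>/api/v2/job_templates/<template_id>/launch/'
curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services/provision-vm/routes --data 'paths[]=/provision-vm' --data 'methods[]=POST'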
Keyva can help you assess your application readiness and adopt Agile methodologies. Through leveraging Infrastructure-As-Code deployments and implementing Composable Infrastructures you can rapidly modernize your applications. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]
Composable Infrastructures
As organizations adopt the Agile framework, there is also a need for the Infrastructure teams to keep pace with providing the basic building blocks for application deployment in a fast and cost-effective manner; this includes the underlying compute, network, and storage resources, the DevOps pipeline, as well as the applications that flow through that pipeline. Enter composable infrastructure as a solution to meet these needs. However, there is usually confusion among teams as to who owns which parts of the deployment. Do the application teams own the entire stack? Which portions do the infrastructure teams own? Who owns the deployment of the OS layer? What about all of the other components? Composable infrastructure can help alleviate some of that confusion.
What is composable infrastructure? The idea is to make the basic building blocks of required infrastructure available in a fluid way (think software-driven datacenter), such that they can be stood up at any time, by any consumer, and with a consistent configuration every single time. This can help you create a more agile and cost-effective end product offering, and take huge leaps into providing IT-as-a-Service. There are also profound business benefits derived from composable infrastructure because of the efficiencies gained via modularization of the IT stack.
Let's take a deeper look. One of the common questions we get asked by our customers is "Who owns IT automation?" Is it the Infrastructure team that owns the Software, or the Operations team, or the Applications team that wants to automate deployments for their CI/CD Pipeline? There is no standard answer to this. But most organizations that are on the upper end of the IT automation maturity curve respond by creating Composable Infrastructure for the Automation tools in question.
The Infrastructure team would own the development of the packaged offering (e.g. Red Hat Ansible and Tower configurations patched and packaged to the latest supported versions made available via source control repository), and the respective Application teams would own the development of IT automation unique for their requirements (e.g. writing Ansible playbooks that automate their tasks, without worrying about the maintenance of the underlying platform).
With composable infrastructure, the Infrastructure and Operations teams would also jointly be responsible for releasing new versions of the package and deploying them throughout the environment. This way, any team that needs to work on Automation need not worry about learning skills around administration or maintenance of the underlying software, and can instead focus on writing automation leveraging pre-built modules. This same concept can also be easily extended to containerized applications.
Keyva has helped multiple organizations assess their application readiness and deliver application modernization, leverage Agile framework, and create Infrastructure-As-Code deployments by implementing composable infrastructures. If you'd like to have us review your environment and provide suggestions on what might work for you, please contact us at [email protected]
In our most recent post we talked about how to set up Kong in your environment. We will now take a look at how to set up an API abstraction service in Kong, so you can route your requests to backend fulfillment APIs.
In the example we look at today, we will set up a generic call for "Get Incident Ticket" and have it translated via Kong to a backend ServiceNow API call for ServiceNow Incident Management. You can use this example to set up similar API call translations to any microservice for any custom or commercial application.
Step 1 – Check to make sure the Kong service is up and running
kong health
kong start
curl -i http://<kong_FQDN_or_IP>:8001/
Step 2 – Set up a Service in Kong for the ServiceNow Incident API
curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services --data 'name=servicenow-sample-get-incident' --data 'url=https://<servername>.service-now.com/api/now/table/incident?sysparm_limit=1'
Step 3 – Create a route for the service's API endpoint
curl -i -X POST --url http://<kong_FQDN_or_IP>:8001/services/servicenow-sample-get-incident/routes --data 'hosts[]=itsm-server' --data 'paths[]=/get-incident' --data 'methods[]=GET'
Step 4 – Test the API routing call. Note that the API translation happens on port 8000 by default. You will also provide the username and password (if needed) for the translated service. In our case, we will be passing basic authentication credentials for ServiceNow.
curl -i -X GET --url http://localhost:8000/get-incident --header "Host: itsm-server" -u username:password
The output returns the JSON-formatted response from ServiceNow, and will look similar to the following:
{"result":[{"parent":"","made_sla":"true","caused_by":"","watch_list":"","upon_reject":"cancel","sys_updated_on":"2019-09-05 11:30:23","child_incidents":"0","hold_reason":"","approval_history":"","number":"INC0010001","resolved_by":"","sys_updated_by":"admin","opened_by":{"link":"https://itsm-server/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441","value":"6816f79cc0a8016401c5a33be04be441"},"user_input":"","sys_created_on":"2019-09-05 11:30:16","sys_domain":{"link":"https://itsm-server/api/now/table/sys_user_group/global","value":"global"},"state":"2","sys_created_by":"admin","knowledge":"false","order":"","calendar_stc":"","closed_at":"","cmdb_ci":"","delivery_plan":"","contract":"","impact":"3","active":"true","work_notes_list":"","business_service":"","priority":"5","sys_domain_path":"/","rfc":"","time_worked":"","expected_start":"","opened_at":"2019-09-05 11:30:16","business_duration":"","group_list":"","work_end":"","caller_id":"","reopened_time":"","resolved_at":"","approval_set":"","subcategory":"","work_notes":"","short_description":"keyva_snow_test","close_code":"","correlation_display":"","delivery_task":"","work_start":"","assignment_group":"","additional_assignee_list":"","business_stc":"","description":"keyva snow test description","calendar_duration":"","close_notes":"","notify":"1","service_offering":"","sys_class_name":"incident","closed_by":"","follow_up":"","parent_incident":"","sys_id":"c1341204dbf3b70045a1f26039961932","contact_type":"","reopened_by":"","incident_state":"2","urgency":"3","problem_id":"","company":"","reassignment_count":"0","activity_due":"","assigned_to":{"link":"https://itsm-server/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441","value":"6816f79cc0a8016401c5a33be04be441"},"severity":"3","comments":"","approval":"not requested","sla_due":"","comments_and_work_notes":"","due_date":"","sys_mod_count":"1","reopen_count":"0","sys_tags":"","escalation":"0","upon_approval":"proceed","correlation_id":"","location":"","category":"inquiry"}]}
This quick walk-through showed you how you can easily create an API abstraction layer using Kong for specific back-end fulfillment calls. You can create similar calls for any level of infrastructure and application APIs and build capabilities towards an Infrastructure-as-Code implementation.
Keyva helps organizations implement API abstraction and leverage it to deliver Infrastructure-as-Code. The team at Keyva has years of experience with Kong and other API abstraction tools. We also offer lunch-and-learn sessions for discussions around how other organizations are using these technologies and what use cases would work best for your organization. Please contact us if you're interested in discussing API abstraction and how it can work for you.