
Releases: cloudposse/terraform-aws-eks-cluster

v2.2.0 KMS key for logs, timeout on wait for cluster

20 May 18:43
dc373e0

PR #150

  • Allow the user to specify a KMS key to encrypt CloudWatch logs, closes #152 (usage sketch after this list)
  • Add a timeout to the default wait_for_cluster_command, supersedes #145, closes #146
  • Additional checks for a valid EKS endpoint, fixes #143, fixes #144
  • Change all references from git.io/build-harness to cloudposse.tools/build-harness, since git.io redirects stop working on April 29th, 2022
  • Update migration docs to refer to v1 and v2 as we switch to production SemVer
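A hedged usage sketch for the first two items. `wait_for_cluster_command` is named in the notes; the KMS input name `cloudwatch_log_group_kms_key_id` and the exact curl command are assumptions, so verify them against the module's variables for your version:

```hcl
module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = "2.2.0"

  # Encrypt the control-plane CloudWatch log group with a customer-managed KMS key
  cloudwatch_log_group_kms_key_id = aws_kms_key.logs.arn # assumed input name

  # The default wait command now has a timeout; it can still be overridden
  wait_for_cluster_command = "curl --max-time 300 --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --insecure --output /dev/null $ENDPOINT/healthz"

  # ... other required inputs (vpc_id, subnet_ids, etc.) omitted for brevity
}
```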

v2.1.0 Output cloudwatch log group name

20 May 18:59
fbabc25

This release is identical to v0.46.0 and is just a renumbering under production semantic versioning rules.

Output cloudwatch log group name @woz5999 (#149)

what

  • Output cloudwatch log group name

why

  • This is helpful for passing the log group name to other resources, e.g. the Datadog log forwarder, as in the sketch below
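A hedged sketch of consuming the new output, assuming it is named `cloudwatch_log_group_name` (inferred from the PR title) and that a Datadog forwarder Lambda already exists:

```hcl
# Forward the EKS control-plane logs to an existing Datadog forwarder Lambda.
# `aws_lambda_function.datadog_forwarder` is hypothetical; a matching
# aws_lambda_permission for logs.amazonaws.com is also required.
resource "aws_cloudwatch_log_subscription_filter" "datadog" {
  name            = "datadog-forwarder"
  log_group_name  = module.eks_cluster.cloudwatch_log_group_name
  filter_pattern  = "" # empty pattern forwards every event
  destination_arn = aws_lambda_function.datadog_forwarder.arn
}
```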

v0.46.0 Output cloudwatch log group name

27 Apr 20:59
fbabc25

This release has been renumbered as version 2.1.0.

Output cloudwatch log group name @woz5999 (#149)

what

  • Output cloudwatch log group name

why

  • This is helpful for passing the log group name to other resources, e.g. the Datadog log forwarder

v2.0.0 use new security-group module

20 May 18:54
b745ed1

This release is identical to version 0.45.0 and is just a renumbering to provide production-level semantic versioning. No migration is needed from v0.45.0 or later.

Version 2.0 includes updates to use our new security group module, which is a breaking change. See the V1 to V2 migration documentation for details on how to safely migrate.
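For example, pinning the module's major version in your configuration lets you take the breaking change deliberately, after following the migration doc (a sketch, not part of the migration doc itself):

```hcl
module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = "~> 2.0" # stay on v2.x; bump the major version only after reading the migration guide

  # ... inputs omitted
}
```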

v0.45.0

08 Jan 18:26
b745ed1
Update Security Group @aknysh (#141)

what

  • Update Security Group

why

  • This module creates an EKS cluster, which automatically creates an EKS-managed Security Group in which all managed nodes are placed automatically by EKS (and in which unmanaged nodes can be placed by the user), so that the nodes and control plane can communicate.

  • Before version 0.45.0, this module created an additional Security Group by default. Prior to version 0.19.0, that additional Security Group was the only one exposed by this module (because EKS at the time did not create the managed Security Group for the cluster), and it was intended that all worker nodes (managed and unmanaged) be placed in it. With version 0.19.0, this module exposed the managed Security Group created by the EKS cluster, in which all managed node groups are placed by default. We now recommend placing non-managed node groups in the EKS-created Security Group as well, by using the allowed_security_group_ids variable (see the sketch below), and not creating an additional Security Group.
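A hedged sketch of the recommended setup. `allowed_security_group_ids` is named above; `create_security_group` as the switch for the additional Security Group is an assumption, so check the module's variables for your version:

```hcl
module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = ">= 0.45.0"

  # Let unmanaged worker nodes communicate via the EKS-managed Security Group
  allowed_security_group_ids = [aws_security_group.unmanaged_workers.id]

  # Do not create the additional Security Group (assumed input name)
  create_security_group = false

  # ... other required inputs omitted
}
```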


v0.44.1

29 Dec 16:16
e738650
Pre-release

🚀 Enhancements

Update to use the Security Group module @aknysh (#138)

what

  • Update to use the Security Group module
  • Add migration doc
  • Update README and GitHub workflows


v1.0.0 Initial release with production Semantic Versioning

20 May 18:49
4f0dc08

This 1.0.0 release is identical to v0.44.0 and is simply a conversion to production Semantic Versioning. If you are already using a later pre-1.0 version, do not migrate to this version; migrate directly to v2.0.0 or later.

This is the first (oldest code) release with production Semantic Versioning, part of Cloud Posse's general policy to convert to production versioning as we make updates to relatively mature modules, especially those where we see breaking changes coming in the near future. This module already has a Version 2.0 with breaking changes.

v0.44.0

06 Dec 21:54
4f0dc08

🚀 Enhancements

Add `service_ipv4_cidr` option (#137)

what

  • Hide KUBECONFIG when not in use
  • Combine service role IAM policies into single managed policy
  • Add service_ipv4_cidr option

why

  • When not intending to use KUBECONFIG, values from it were being used anyway, causing problems, particularly for "exec auth".
  • Fixes #135 (introduced in #132). Supersedes and closes #136.
  • Closes #130
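A minimal sketch of the new option; `service_ipv4_cidr` comes straight from the notes, and the CIDR shown is only an example:

```hcl
module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = "0.44.0"

  # CIDR block from which Kubernetes assigns service (ClusterIP) addresses;
  # it must not overlap with the VPC or pod CIDRs
  service_ipv4_cidr = "172.20.0.0/16"

  # ... other required inputs omitted
}
```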

🐛 Bug Fixes

Fix bug introduced in 0.43.3. Hide KUBECONFIG when not in use @Nuru (#137)


v0.43.4

16 Nov 19:07
b5fe6a9
Pre-release

Note: This release has a known issue introduced in 0.43.3. Upgrade to 0.44.0 or roll back to 0.43.2.

🚀 Enhancements

Remove unneeded `template` provider, close #133 @Nuru (#134)

what

  • Remove unneeded template provider

why

  • template provider deprecated
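For context, code that used the deprecated provider's `data "template_file"` is typically rewritten with Terraform's built-in `templatefile()` function; the template path and variables below are hypothetical:

```hcl
# Before (deprecated template provider):
# data "template_file" "userdata" {
#   template = file("${path.module}/userdata.sh.tpl")
#   vars     = { cluster_name = var.cluster_name }
# }

# After (built-in function, no extra provider required):
locals {
  userdata = templatefile("${path.module}/userdata.sh.tpl", {
    cluster_name = var.cluster_name
  })
}
```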


v0.43.3

06 Nov 00:05
5439d06
Pre-release

Note: This release introduced a bug in setting the IAM policy for the service role. Update to 0.44.0 or roll back to 0.43.2.

🚀 Enhancements

Prevent creating log group by the iam role @nitrocode (#132)

what

  • Prevent the log group from being recreated by the IAM role

why

See: hashicorp/terraform#14750, terraform-aws-modules/terraform-aws-eks#920

This happens because the EKS cluster recreates the CloudWatch Log Group after Terraform destroys it. The AmazonEKSServicePolicy IAM policy (assigned to the EKS cluster role by default within this module) includes the CreateLogGroup permission, along with everything else needed to continue logging correctly. When Terraform destroys the CloudWatch Log Group, the still-running EKS cluster creates it again. Then, on the next terraform apply, the CloudWatch Log Group no longer exists in your state (because Terraform actually destroyed it), and Terraform doesn't know about the resource that was created outside of it. See terraform-aws-modules/terraform-aws-eks#920.
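One hedged way to express the same guard outside the module (the actual change in #132 may differ): explicitly deny `logs:CreateLogGroup` to the cluster's service role so EKS cannot recreate the group behind Terraform's back. The role name is hypothetical:

```hcl
data "aws_iam_policy_document" "deny_create_log_group" {
  statement {
    effect    = "Deny"
    actions   = ["logs:CreateLogGroup"]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "deny_create_log_group" {
  name   = "deny-create-log-group"
  role   = "my-eks-cluster-service-role" # hypothetical role name
  policy = data.aws_iam_policy_document.deny_create_log_group.json
}
```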
