
Legacy Cluster add-ons

For more information on add-ons see the documentation.

Overview

Cluster add-ons are resources like Services and Deployments (with pods) that are shipped with the Kubernetes binaries and are considered an inherent part of Kubernetes clusters.

There are currently two classes of add-ons:

  • Add-ons that will be reconciled.
  • Add-ons that will be created if they don't exist.

More details can be found in addon-manager/README.md.
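For illustration, the class of an add-on is declared with the addonmanager.kubernetes.io/mode label on the object. The label and its values are the real selector used by the addon-manager; the ConfigMap name and data below are hypothetical:

```yaml
# Hypothetical add-on object; only the mode label matters here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon-config        # hypothetical name
  namespace: kube-system
  labels:
    # "Reconcile": periodically reset to the on-disk config.
    # "EnsureExists": created only if missing, never overwritten.
    addonmanager.kubernetes.io/mode: Reconcile
data:
  example.conf: |
    key=value
```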

Cooperating Horizontal / Vertical Auto-Scaling with "reconcile class addons"

"Reconcile" class add-ons are periodically reconciled to the original state given by the initial config. To keep Horizontal / Vertical Auto-scaling functional, the related fields in the config should be left unset. More specifically, leave replicas in ReplicationController / Deployment / ReplicaSet unset for Horizontal Scaling, and leave resources for containers unset for Vertical Scaling. The periodic reconcile won't clobber these fields, so they can be managed by the Horizontal / Vertical Auto-scaler.
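A minimal sketch of such a config, with hypothetical names, might look like this. The commented-out fields mark what is deliberately omitted so the autoscalers can manage them:

```yaml
# Hypothetical "Reconcile" class add-on Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-addon               # hypothetical name
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: intentionally unset -- managed by Horizontal Auto-scaling
  selector:
    matchLabels:
      k8s-app: example-addon
  template:
    metadata:
      labels:
        k8s-app: example-addon
    spec:
      containers:
      - name: example
        image: k8s.gcr.io/pause:3.1
        # resources: intentionally unset -- managed by Vertical Auto-scaling
```

Because the reconcile only resets fields that are present in the config, leaving them out entirely is what keeps the autoscalers' changes from being reverted.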

Add-on naming

The suggested naming for most of the resources is <basename> (with no version number). Resources like Pod, ReplicationController and DaemonSet are exceptions. It is hard to update a Pod in place because many of its fields are immutable, and for ReplicationController and DaemonSet an in-place update may not trigger the underlying pods to be re-created. For these resources you should change the name during an update to trigger a complete deletion and re-creation.
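As a hypothetical sketch of the versioned-name convention, bumping a suffix in the object's name makes the addon-manager delete the old object and create the new one, re-creating the underlying pods:

```yaml
# Hypothetical DaemonSet; renamed from example-agent-v1 to example-agent-v2
# during an update to force full deletion and re-creation.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent-v2            # was: example-agent-v1
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: example-agent
      version: v2
  template:
    metadata:
      labels:
        k8s-app: example-agent
        version: v2
    spec:
      containers:
      - name: agent
        image: k8s.gcr.io/pause:3.1
```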
