Liam White liamawhite @tetrateio Seattle, WA USA www.linkedin.com/in/liam-white Software Engineer @tetrateio. @istio Maintainer.

liamawhite/licenser 5

Verify and append licenses to your GitHub repositories.

liamawhite/istio-dev-framework 2

Live debug Istio code in a Kubernetes cluster.

liamawhite/amalgam8 0

Content and Version-based Routing Fabric for Polyglot Microservices

liamawhite/api 0

API, config and standard vocabulary definitions for the Istio project

liamawhite/cgm-remote-monitor 0

nightscout web monitor

liamawhite/chartmuseum 0

Helm Chart Repository with support for Amazon S3 and Google Cloud Storage

liamawhite/docker.github.io 0

Source repo for Docker's Documentation

Pull request review comment istio/api

Mesh-wide minimum tls version

 message ProxyConfig {
   // All control planes running in the same service mesh should specify the same mesh ID.
   // Mesh ID is used to label telemetry reports for cases where telemetry from multiple meshes is mixed together.
   string mesh_id = 30;
+
+  // TLS protocol versions.
+  enum TLSProtocol {
+    // Automatically choose the optimal TLS version.
+    TLS_AUTO = 0;
+
+    // TLS version 1.0
+    TLSV1_0 = 1;
+
+    // TLS version 1.1
+    TLSV1_1 = 2;
+
+    // TLS version 1.2
+    TLSV1_2 = 3;
+
+    // TLS version 1.3
+    TLSV1_3 = 4;
+  }
+
+  // Default minimum TLS protocol version. This is overridden by a minimum protocol version set
+  // in a more specific resource (e.g. Gateways).
+  TLSProtocol min_protocol_version = 31;

This is a good question. The user requirement is only for the server side, as client-side auto already prevents 1.0, but I will defer to whichever one people want.

liamawhite

comment created time in 10 hours

Pull request review comment istio/api

Mesh-wide minimum tls version

 message ProxyConfig {
   // All control planes running in the same service mesh should specify the same mesh ID.
   // Mesh ID is used to label telemetry reports for cases where telemetry from multiple meshes is mixed together.
   string mesh_id = 30;
+
+  // TLS protocol versions.
+  enum TLSProtocol {
+    // Automatically choose the optimal TLS version.
+    TLS_AUTO = 0;
+
+    // TLS version 1.0
+    TLSV1_0 = 1;
+
+    // TLS version 1.1
+    TLSV1_1 = 2;
+
+    // TLS version 1.2
+    TLSV1_2 = 3;
+
+    // TLS version 1.3
+    TLSV1_3 = 4;
+  }
+
+  // Default minimum TLS protocol version. This is overridden by a minimum protocol version set

@nrjpoddar were you looking for something more detailed than this?

liamawhite

comment created time in 13 hours

Pull request review comment istio/api

Mesh-wide minimum tls version

 message ProxyConfig {
   // All control planes running in the same service mesh should specify the same mesh ID.
   // Mesh ID is used to label telemetry reports for cases where telemetry from multiple meshes is mixed together.
   string mesh_id = 30;
+
+  // TLS protocol versions.

I copied and pasted from Gateways here because I couldn't see an obvious way to structure that one for re-use.

liamawhite

comment created time in 13 hours

PR opened istio/api

Reviewers
Mesh-wide minimum tls version do-not-merge/hold

As discussed on Slack. It's obviously too late for 1.7.0, but would anyone be against this being cherry-picked into 1.7.1?

Signed-off-by: Liam White liam@tetrate.io

+371 -140

0 comment

5 changed files

pr created time in 13 hours

create branch liamawhite/api

branch : min-tls-version-sidecar

created branch time in 13 hours

push event liamawhite/api

Louis Ryan

commit sha b524b1eb292237dfe01889b8049dca10310999db

Replace 'scope' with 'export_to' namespace (#758) * Replace public/private scoping with namespace scoped exports Add flags to control scopeTo defaults Update doc for locality weighted LB * Hide from docs and other misc fixes

view details

Shriram Rajagopalan

commit sha 3c7e31a64853dade01bfd5019bfa0f983a073db4

Enabling SDS in the gateway (#778) * Enabling SDS in the gateway Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com> * lint Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com>

view details

Shriram Rajagopalan

commit sha 5c6aec28ebf756d66e3b6c44208c79d194ce8cdb

Revert "Enabling SDS in the gateway (#778)" (#779) This reverts commit 3c7e31a64853dade01bfd5019bfa0f983a073db4.

view details

Douglas Reid

commit sha 1b0a0346319495e50178c2e62addefa845056365

Add way to signal encoding used for CompressedAttributes to Mixer (#770) * Add mechanism to signal encoding used for CompressedAttributes to mixer proto * Update proto.lock

view details

Shriram Rajagopalan

commit sha d5da499b61ddc6a85248edee2f214a2104364953

revert sds name (#781) Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com>

view details

Shriram Rajagopalan

commit sha e3015e7a46e52cbc44c8ed60bde57318eb7a2dd9

Fixing SDS field/semantics in the gateway (#780) * Enabling SDS in the gateway Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com> * lint Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com> * nits Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com> * cleanups Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com> * update Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com> * updates Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com> * protolock Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com>

view details

Andra Cismaru

commit sha 3094619c84733caef53723bfc96fa63ceb58cd57

Add subject_alt_names field in ServiceEntry (#785) * Add service_accounts field in ServiceEntry * Ran make proto-commit * Added example with format * Rename to subject_alt_names * Move example out of the message definition * Added a period * Remove hide_from_docs

view details

Jimmy Chen

commit sha 1a129f07e6f27235eebb70bab75f2cd6cffdddf5

Update comment for credentialName (#786) * Update comment for credentialName * Update comment

view details

Martin Taillefer

commit sha 9883956e6ed2269123e1f488ebbe2e7e248de0bd

Doc fixes (#788)

view details

Joshua Blatt

commit sha 07829e06cab1186907a170f7478675b5b93457fc

Add transport error retry config to mixer client. (#792)

view details

Limin Wang

commit sha 27010bf6b4f37f5252561e544e99c1f910e5110b

Rename "principals" to "names". (#791) * Rename "principals" to "names". Since this is defined under "subjects", we are basically referring to the "name" of a subject. * Update comments.

view details

Martin Taillefer

commit sha 92b7ddc0f30b3aab6a5e82a861e54bf55fe249bd

Doc fix to have the mesh config show up on istio.io. (#794)

view details

John Howard

commit sha 01a2afd81a4abcea3f06765f515293c9b05b8a3c

Fix typos in sidecar.proto (#795)

view details

John Howard

commit sha d817a1a3e29a0687920589181aec48b5a39daabb

Fix typos in sidecar.proto (1.1) (#796)

view details

Pengyuan Bian

commit sha 5945a02236f53ad860d518772f730594709b1234

add server_name to mixer remote handler tls / mtls (#789) * add server_name to mixer remote handler tls / mtls * proto.lock

view details

Caleb Gilmour

commit sha 2b2fabd451530ae28003830c52f0ca43ca63be14

Generate files with correct owner. (#798) Signed-off-by: Caleb Gilmour <caleb.gilmour@datadoghq.com>

view details

Caleb Gilmour

commit sha f6b6c4168da150dc23073c9f0ec55fc2b1924690

Add Datadog tracing to proxy config (#797) Signed-off-by: Caleb Gilmour <caleb.gilmour@datadoghq.com>

view details

Shriram Rajagopalan

commit sha 1b39429492ff584547a70b6afa64dd38939e4777

doc fixes (#801) Signed-off-by: Shriram Rajagopalan <shriramr@vmware.com>

view details

louiscryan

commit sha 823a224f0bb81891772af1cd43a97468098b7b24

Merge branch 'release-1.1' into Merge11ToMaster

view details

Louis Ryan

commit sha 53b11a3dc9ff646d30dfd23785d0b99b11b221ef

Merge pull request #806 from louiscryan/Merge11ToMaster Merge 1.1 to master

view details

push time in 14 hours

pull request comment istio/istio

Quantity Istio operator tests and fix

@howardjohn tests should now pass, can you approve?

liamawhite

comment created time in 2 days

pull request comment istio/istio

Quantity Istio operator tests and fix

/test release-notes_istio

liamawhite

comment created time in 2 days

push event liamawhite/istio

Liam White

commit sha b26b74aa875bee5ccdd067c4d5a79a2d61f6f35e

fix linting Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in 2 days

push event liamawhite/istio

Rama Chavali

commit sha 85f0c5c444e75076f8485489d5d7eb8204d5d6b8

minor ads renaming (#25537) Signed-off-by: Rama Chavali <rama.rao@salesforce.com>

view details

Rama Chavali

commit sha 13e83df7707f5ebda16b87129f4e3546fb06a338

disable enforcing successrate based outlier detection (#25534) Signed-off-by: Rama Chavali <rama.rao@salesforce.com>

view details

Istio Automation

commit sha 4c14c4673ab8748a0c77922290d91b6ce68b7b7c

Automator: update common-files@master in istio/istio@master (#25540)

view details

Rama Chavali

commit sha b330be769cdb93375ac73ba127cc0b8d82ced4c8

refactor pod lookup code (#25524) * refactor pod lookup code Signed-off-by: Rama Chavali <rama.rao@salesforce.com> * lint fixes Signed-off-by: Rama Chavali <rama.rao@salesforce.com>

view details

Dozer

commit sha ec9166a1e46945c6e301b1165c7ec1cc61b0aeb6

optimize memory usage (#25531) (#25532)

view details

Taylor Barrella

commit sha 25d5d0c1be92060466dc69bd0cb51b3101294d4c

Use MARKDOWN_LINT_ALLOWLIST instead of _WHITELIST (#25545) Following https://github.com/istio/common-files/pull/276

view details

Istio Automation

commit sha ceb979df1c1fdd5495aafe63f92c4816424f8a33

Automator: update common-files@master in istio/istio@master (#25547)

view details

Justin Wei

commit sha ea1acc09b6a50e2b9fd6953fa2b5447e6f3e5d1d

Azure Platform Support (#24995) * azure rough template * azure interface with API version update * fixed lints * moved API version update inside azureEnv * added extra metadata, ignore empty fields * make metadata parsers azureEnv recievers * added azure_ prefix to tags Co-authored-by: Justin Wei <juswei@google.com>

view details

Ed Snible

commit sha 9058460543fa2b6e6e48c42a20aee280e85357f7

XDS-based replacement for /debug/syncz (#25344) * Experimental XDS-based replacement for /debug/syncz * istioctl client side implementation for XDS istio.io/debug/syncz * Handle 'x proxy-status <pod>' * Early out 'proxy-status <pod>', pass proxyID as resource name * Initial error handling * Logging and output error handling * Clean-up

view details

John Howard

commit sha adeffcf1db154d207bbcd5fa6a13019702f64721

Optimize DR lookup in EDS (#25518) * Improve EDS benchmark * Optimize DR lookup in EDS ``` name old time/op new time/op delta EndpointGeneration/1/100-8 257µs ±16% 157µs ±11% -39.00% (p=0.008 n=5+5) EndpointGeneration/10/10-8 100µs ± 9% 102µs ±14% ~ (p=0.841 n=5+5) EndpointGeneration/100/10-8 590µs ±13% 626µs ±11% ~ (p=0.310 n=5+5) EndpointGeneration/1000/1-8 869µs ± 8% 940µs ± 7% ~ (p=0.095 n=5+5) name old alloc/op new alloc/op delta EndpointGeneration/1/100-8 75.1kB ± 0% 75.1kB ± 0% ~ (p=0.357 n=5+5) EndpointGeneration/10/10-8 22.2kB ± 0% 22.2kB ± 0% ~ (p=0.095 n=5+5) EndpointGeneration/100/10-8 41.9kB ± 0% 42.0kB ± 0% ~ (p=0.730 n=5+5) EndpointGeneration/1000/1-8 37.7kB ± 3% 37.9kB ± 1% ~ (p=0.690 n=5+5) name old allocs/op new allocs/op delta EndpointGeneration/1/100-8 1.30k ± 0% 1.30k ± 0% ~ (all equal) EndpointGeneration/10/10-8 395 ± 0% 395 ± 0% ~ (all equal) EndpointGeneration/100/10-8 401 ± 1% 401 ± 0% ~ (p=0.730 n=5+5) EndpointGeneration/1000/1-8 62.0 ± 0% 62.0 ± 0% ~ (all equal) ``` * fix tests * fix lint * fix test * Add release note * wrong pr * refactor * fix nil

view details

John Howard

commit sha eb4178a79ad31dd63e901c2902af65354afc2e26

Remove inaccurate and non-performant EDS metrics (#25519) * Remove inaccurate and non-performant EDS metrics ``` name old time/op new time/op delta EndpointGeneration/100/1-8 94.3µs ± 7% 89.7µs ±17% ~ (p=0.635 n=3+3) EndpointGeneration/1000/1-8 855µs ± 2% 822µs ± 3% ~ (p=0.125 n=3+3) EndpointGeneration/10000/1-8 7.62ms ± 6% 7.14ms ± 3% ~ (p=0.162 n=3+3) EndpointGeneration/100/100-8 714µs ± 3% 665µs ± 7% ~ (p=0.156 n=3+3) EndpointGeneration/1000/100-8 5.17ms ± 7% 6.45ms ±20% ~ (p=0.193 n=3+3) name old alloc/op new alloc/op delta EndpointGeneration/100/1-8 6.59kB ± 0% 4.69kB ± 0% -28.90% (p=0.000 n=3+3) EndpointGeneration/1000/1-8 37.9kB ± 0% 35.9kB ± 0% -5.29% (p=0.000 n=3+3) EndpointGeneration/10000/1-8 312kB ± 5% 304kB ± 3% ~ (p=0.434 n=3+3) EndpointGeneration/100/100-8 89.9kB ± 0% 78.5kB ± 0% -12.76% (p=0.000 n=3+3) EndpointGeneration/1000/100-8 569kB ± 4% 533kB ± 9% ~ (p=0.310 n=3+3) name old allocs/op new allocs/op delta EndpointGeneration/100/1-8 60.7 ± 1% 21.0 ± 0% -65.38% (p=0.000 n=3+3) EndpointGeneration/1000/1-8 60.7 ± 1% 21.0 ± 0% -65.38% (p=0.000 n=3+3) EndpointGeneration/10000/1-8 63.3 ± 3% 29.0 ± 0% -54.21% (p=0.001 n=3+3) EndpointGeneration/100/100-8 1.20k ± 0% 0.96k ± 0% ~ (zero variance) EndpointGeneration/1000/100-8 4.38k ± 6% 3.90k ±14% ~ (p=0.225 n=3+3) ``` These metrics are both extremely costly and are actually not accurate. The metric tries to give a global view of the number of endpoints in a Service - this is wrong by definition, as each proxy will have a different view of EDS based on sidecar/network. * add release note * Update releasenotes/25519.yaml Co-authored-by: Brian Avery <bavery@redhat.com> Co-authored-by: Brian Avery <bavery@redhat.com>

view details

Nathan Mittler

commit sha 61ff73d3970ac696b0d38cfbea2dc9a72f258156

[Test Framework] Reduce use on Environment (#25522) This is additional cleanup to remove more direct use of the Environment API by tests and throughout the framework.

view details

Navraj Singh Chhina

commit sha 4ab58fdd51bc4e260ad05d68c852775ddf1f994e

Disable SDS Config generation for Sidecar Proxy when CredentialName is set (#25517) * fix incorrect SDS config generation * make gen

view details

John Howard

commit sha 131bd69e9469519a1a6d692784f4ac5f6438c0b3

Pin number of CPUs for benchmark (#25552) Right now its using 16 CPUs but is limited to 8. I think this leads to flaky results

view details

John Howard

commit sha 7353c84b560fd469123611476314e4aee553611d

Handle foreign instances by workload rather than endpoint (#25502) * Handle foreign instances by workload rather than endpoint Currently foreign instances are based on ServiceInstance. This has two issues: * They are derived from Endpoint, which has the Service -> Port mapping already done, breaking the port fields in ServiceEntry * They are derived from Endpoint, meaning you need a Service AND ServiceEntry to select a pod * They are basically hacking the ServiceInstance by passing a bunch of bogus fields in places that are relevant and then later overwriting them, rather than passing the info we actually know and letter the handlers work only on that info * format * Naming * fix race condition

view details

Tariq Ibrahim

commit sha 6d3679a693bd7fde6ad7d3826bd4719b16fbd412

pin a commit SHA of openshift/api as a dep instead (#25486)

view details

jacob-delgado

commit sha 8c6919d41f56978ec7bcb2ddad6bcbec51faf925

Add env to support RSA Key size for Istio self signed CA certificates (#25188) * Add env to support specification of CA generated certificates When user specifies CITADEL_CA_RSA_KEY_SIZE that key size is used to generate Istio CA certificates allowing for finer control of security. * Fix unit test * Run make gen

view details

Shriram Rajagopalan

commit sha 7124d7d2e90c4f35b09e3f4912e2d5e7f12626bc

fix target ports in workload entries (#25428) * fix target ports in workload entries Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * lint Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * nits * make gen Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * Revert "make gen" This reverts commit 79ad9ca53ef269bb4498eeac2f3c5f36e8864654. * revert go mod * no 9090 Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * fix headless target port Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * lint Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * undo security change

view details

Çağatay Gürtürk

commit sha b242d9937a7298c161eba9fdd195dea2f967f08a

Properly quote .Values.telemetry.v2.accessLogPolicy.logWindowDuration (#25551) * Properly quote .Values.telemetry.v2.accessLogPolicy.logWindowDuration value Fixes #25550 * Add generated files * Run make gen

view details

Steven Landow

commit sha c8a0a1b491b0c74652a9ae9837b8dc9058c17d82

test analyzer does not fail unlabeled tests when suite is skipped (#25427) * test analyzer allows skipped tests without feature * add target for analyzing specific suites

view details

push time in 2 days

issue closed istio/istio

hpaSpec and values required to set istiod HPA

[ ] Configuration Infrastructure [ ] Docs [x] Installation [ ] Networking [ ] Performance and Scalability [ ] Policies and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure

Expected behavior

When setting just the hpaSpec and nothing in values in the IstioOperator resource I expect an HPA to get created. Setting just an hpaSpec works for gateways.

The reason it doesn't work is that the HPA isn't created at all, so there is nothing for the translator to overlay. This is the offending line.

2020-07-31T17:50:13.730052Z	info	translator	resource Kind:name HorizontalPodAutoscaler:istiod doesn't exist in the output manifest, skip overlay.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiocontrolplane
  namespace: istio-system
spec:
  components:
    pilot:
      k8s:
       hpaSpec:
         maxReplicas: 10
         metrics:
         - resource:
           name: cpu
           targetAverageUtilization: 75
           type: Resource
         minReplicas: 1
         scaleTargetRef:
           apiVersion: extensions/v1beta1
           kind: Deployment
           name: istiod

closed time in 3 days

liamawhite

issue comment istio/istio

hpaSpec and values required to set istiod HPA

I just double-checked and that wasn't my issue; it was errant copying and pasting into the issue. However, I re-ran my test scenario without the values and it worked this time, so I must have changed something.

liamawhite

comment created time in 3 days

issue comment istio/istio

istioctl analyze: Detect connectivity between specified Pods

We are also interested in this.

t-ide

comment created time in 5 days

issue closed istio/api

[release-1.7] Fix quantity and lock down others to a more specific type

Manual cherrypick required.

#1567 failed to apply on top of branch "release-1.7":

closed time in 7 days

istio-testing

issue comment istio/api

[release-1.7] Fix quantity and lock down others to a more specific type

manual cherry-pick has been merged

istio-testing

comment created time in 7 days

pull request comment istio/api

[Manual Cherry-Pick] Fix quantity and lock down others to a more specific type (#1567)

I think it's probably safe to merge; otherwise those tests would be required?

liamawhite

comment created time in 7 days

pull request comment istio/api

[Manual Cherry-Pick] Fix quantity and lock down others to a more specific type (#1567)

It seems to be failing on a merge conflict with master. This is the same failure that required me to do a manual cherry-pick. So ¯\_(ツ)_/¯?

liamawhite

comment created time in 8 days

PR opened istio/api

[Manual Cherry-Pick] Fix quantity and lock down others to a more specific type (#1567)
  • Fix quantity and lock down others to correct IntOrString

Signed-off-by: Liam White liam@tetrate.io

  • fix imports

Signed-off-by: Liam White liam@tetrate.io

+580 -595

0 comment

7 changed files

pr created time in 8 days

PR closed istio/api

Reviewers
Set release managers as CODEOWNERS for release-1.7 (#1544) cla: yes needs-rebase size/L
+603 -624

1 comment

22 changed files

liamawhite

pr closed time in 8 days

PR opened istio/api

Reviewers
Set release managers as CODEOWNERS for release-1.7 (#1544)
+603 -624

0 comment

22 changed files

pr created time in 8 days

create branch liamawhite/api

branch : release-1.7

created branch time in 8 days

Pull request review comment istio/istio

'istioctl ps <pod>' Show Pilot config if Envoy unresponsive

 Retrieves last sent and last acknowledged xDS sync from Istiod to each Envoy in
 			}
 			if len(args) > 0 {
 				podName, ns := handlers.InferPodInfo(args[0], handlers.HandleNamespace(namespace, defaultNamespace))
+
+				path := fmt.Sprintf("/debug/config_dump?proxyID=%s.%s", podName, ns)
+				istiodDumps, err := kubeClient.AllDiscoveryDo(context.TODO(), istioNamespace, path)
+				if err != nil {
+					return err
+				}
+
+				// Contacting Envoy /config_dump can trigger it to disconnect from Istiod.  Don't

Networking probably makes sense. I agree it's probably difficult to actually weaponize it but it definitely seems like a bug.

esnible

comment created time in 9 days

push event liamawhite/api

Liam White

commit sha 7c6baa2401c1991edf1f5a59cd8f9d477df57558

fix imports Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in 12 days

push event liamawhite/api

Liam White

commit sha 1cc958e0770e4db2a0fabde82ed449302b268c7d

Fix quantity and lock down others to correct IntOrString Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in 12 days

push event liamawhite/api

Istio Automation

commit sha 1bc30d922aa89a635baddd4c0d51b772c9547f0c

Automator: update common-files@master in istio/api@master (#1479)

view details

Istio Automation

commit sha 2e72df5eadd734b4e6f98951901e92be1748bd7f

Automator: update common-files@master in istio/api@master (#1480)

view details

Istio Automation

commit sha add85bd6bfe8383c17b9c243f8113a33bd34ddac

Automator: update common-files@master in istio/api@master (#1485)

view details

John Howard

commit sha bac02847c81b5827b450333d4e41ec9c9207b410

Move operator to gogo proto (#1483) * Move operator to gogo proto * switch to gogo * fix gen

view details

carolynhu

commit sha 6e8e32f7acc2f6f603ffc4775e75568e1078dff9

Add mesh ID to ProxyConfig (#1274) * introduce mesh_id field in ProxyConfig * address review comments

view details

Shamsher Ansari

commit sha 4b9355d6dae6eaeb9a293f6fc4d3756ee89fe5a0

Update gateway selector to be consistent with other names (#1473)

view details

Istio Automation

commit sha c07d1d63dab72b9647bfd4078aa72fd0e60bb078

Automator: update common-files@master in istio/api@master (#1486)

view details

Istio Automation

commit sha 4f9d78f4aed8742d035e40a9312bea90229c617e

Automator: update common-files@master in istio/api@master (#1490)

view details

Shamsher Ansari

commit sha 933b83065c19818dac2393f6f82b45c7a165b106

Update enable_auto_mtls default to true (#1489)

view details

Istio Automation

commit sha 19d61f093aabf614253708db1fcc04382058200b

Automator: update common-files@master in istio/api@master (#1494)

view details

Jason Wang

commit sha 87380418ee3266ba90a7245f29327801356f681a

Update CRDs to v1 (#1495) * Add script to check schema equality * Update CRDs to v1 * Update makefile

view details

Shamsher Ansari

commit sha 24dd11816ef268875dcbeccb02bfbc50ba008e27

Specify defaults for mesh and proxy configs (#1493) * Specify defaults for mesh and proxy configs * Update default for auth policy

view details

Istio Automation

commit sha 8bca8f68738867777a8b41948a430e56b073af8f

Automator: update common-files@master in istio/api@master (#1497)

view details

Istio Automation

commit sha 4eaf05f2696c07031c98526750de9956f0c46c5c

Automator: update common-files@master in istio/api@master (#1498)

view details

Istio Automation

commit sha 91341632e39e2feef0d41dc63972c740ea95f0c8

Automator: update common-files@master in istio/api@master (#1501)

view details

Jason Wang

commit sha 0503c9d9b606a86cbd97742c964c9bbb58259938

Add preserveUnknownFields to EnvoyFilter (#1500) * Add preserveUnknownFields to EnvoyFilter * make gen

view details

Shriram Rajagopalan

commit sha 42be9dcd33d06aaa4c9d8f567a014f184a792648

Add targetPort to ServiceEntry Port (#1477) * use targetPort for workloadEntries Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * fix workload entry Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * proto lock Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * undo deprecation Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * generate Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * regenerate Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * update docs Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * nits Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io> * reword Signed-off-by: Shriram Rajagopalan <rshriram@tetrate.io>

view details

Istio Automation

commit sha e6d1853c936f33dc192ded5ad6bda77167d2a378

Automator: update common-files@master in istio/api@master (#1502)

view details

Istio Automation

commit sha bc5bcf1ee342ec909353a3a4f4dabb914a75aa49

Automator: update common-files@master in istio/api@master (#1506)

view details

Jason Wang

commit sha 48874a55a924e6f13c0f26d679c792146ca26972

Generate x-kubernetes-preserve-unknown-fields at EnvoyFilter field level (#1507) * proto changes * Generate x-kubernetes-preserve-unknown-fields at EnvoyFilter field level

view details

push time in 12 days

PR opened istio/api

Reviewers
Fix quantity and lock down others to correct IntOrString

As demonstrated by https://github.com/istio/istio/pull/25538, the Quantity type is currently (has always been?) broken in the install API. This PR changes them all to IntOrString and adds code to bring the type in line with the actual YAML spec.

It also changes a couple of the other types to be their correct type as specified in the Kubernetes API (targetAverageUtilization) and changes some of the TypeInterfaces that should actually be IntOrStrings (e.g. ports).

Signed-off-by: Liam White liam@tetrate.io

+496 -539

0 comment

7 changed files

pr created time in 12 days

create branch liamawhite/api

branch : intorstringify

created branch time in 12 days

Pull request review comment istio/istio

'istioctl ps <pod>' Show Pilot config if Envoy unresponsive

 Retrieves last sent and last acknowledged xDS sync from Istiod to each Envoy in
 			}
 			if len(args) > 0 {
 				podName, ns := handlers.InferPodInfo(args[0], handlers.HandleNamespace(namespace, defaultNamespace))
+
+				path := fmt.Sprintf("/debug/config_dump?proxyID=%s.%s", podName, ns)
+				istiodDumps, err := kubeClient.AllDiscoveryDo(context.TODO(), istioNamespace, path)
+				if err != nil {
+					return err
+				}
+
+				// Contacting Envoy /config_dump can trigger it to disconnect from Istiod.  Don't

Am I missing something? It seems like a flaw that you can force envoy to disconnect from istiod by hitting /config_dump. Isn't that a potential DoS vector?

esnible

comment created time in 12 days

issue opened istio/istio

hpaSpec and values required to set istiod HPA

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure [ ] Docs [x] Installation [ ] Networking [ ] Performance and Scalability [ ] Policies and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure

Expected behavior

When setting just the hpaSpec and nothing in values in the IstioOperator resource I expect an HPA to get created. Setting just an hpaSpec works for gateways.

The reason it doesn't work is that the HPA isn't created at all, so there is nothing for the translator to overlay. This is the offending line.

2020-07-31T17:50:13.730052Z	info	translator	resource Kind:name HorizontalPodAutoscaler:istiod doesn't exist in the output manifest, skip overlay.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiocontrolplane
  namespace: istio-system
spec:
  components:
  pilot:
    k8s:
     hpaSpec:
       maxReplicas: 10
       metrics:
       - resource:
         name: cpu
         targetAverageUtilization: 75
         type: Resource
       minReplicas: 1
       scaleTargetRef:
         apiVersion: extensions/v1beta1
         kind: Deployment
         name: istiod

created time in 12 days

Pull request review comment istio/istio

'istioctl ps <pod>' Show Pilot config if Envoy unresponsive

 Retrieves last sent and last acknowledged xDS sync from Istiod to each Envoy in
 			}
 			if len(args) > 0 {
 				podName, ns := handlers.InferPodInfo(args[0], handlers.HandleNamespace(namespace, defaultNamespace))
+
+				path := fmt.Sprintf("/debug/config_dump?proxyID=%s.%s", podName, ns)
+				istiodDumps, err := kubeClient.AllDiscoveryDo(context.TODO(), istioNamespace, path)
+				if err != nil {
+					return err
+				}
+
+				// Contacting Envoy /config_dump can trigger it to disconnect from Istiod.  Don't

This sounds like we should raise an issue with Envoy?

esnible

comment created time in 13 days

Pull request review comment istio/istio

'istioctl ps <pod>' Show Pilot config if Envoy unresponsive

 import (
 	"istio.io/istio/pkg/kube"
 )
+const (
+	// Used for comparison with Istiod configuration when sidecar unable to supply
+	emptyEnvoyDump = `
+{
+		"configs": [
+			{
+				"@type": "type.googleapis.com/envoy.admin.v3.ListenersConfigDump"
+			},
+			{
+				"@type": "type.googleapis.com/envoy.admin.v3.RoutesConfigDump"
+			}

Do we need clusters? If not, can you add a comment explaining why?

esnible

comment created time in 14 days

Pull request review comment istio/istio

'istioctl ps <pod>' Show Pilot config if Envoy unresponsive

 Retrieves last sent and last acknowledged xDS sync from Istiod to each Envoy in
 			}
 			if len(args) > 0 {
 				podName, ns := handlers.InferPodInfo(args[0], handlers.HandleNamespace(namespace, defaultNamespace))
+
+				path := fmt.Sprintf("/debug/config_dump?proxyID=%s.%s", podName, ns)
+				istiodDumps, err := kubeClient.AllDiscoveryDo(context.TODO(), istioNamespace, path)
+				if err != nil {
+					return err
+				}
+
+				// Contacting Envoy /config_dump can trigger it to disconnect from Istiod.  Don't

?!

esnible

comment created time in 14 days

issue comment istio/istio

DeepCopyInto of IstioOperatorSpec causes panic when patch value set

@knight42 I believe the long term plan is to migrate away from gogo/protobuf.

Tagging @howardjohn as he knows more.

liamawhite

comment created time in 14 days

pull request comment istio/istio

Quantity Istio operator tests and fix

@howardjohn yes, I opened the initial PR as an assertion that the API is currently broken. I'm discussing with @jasonwzm to properly root cause and come up with a good solution.

liamawhite

comment created time in 16 days

issue comment istio/istio

DeepCopyInto of IstioOperatorSpec causes panic when patch value set

This is used under the covers by controller-runtime, I believe. It calls json.Marshal on the Kube wrapper for the spec, which presumably calls json.Marshal on the spec itself.
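
As a rough, self-contained sketch of that nesting (the Spec and IstioOperator types below are hypothetical stand-ins, not the real operator API), marshalling the outer Kube object does invoke the nested spec's own MarshalJSON:

package main

import (
	"encoding/json"
	"fmt"
)

// Spec is a hypothetical stand-in for the operator spec; its custom MarshalJSON
// is what ends up being invoked when the outer Kube object is marshalled.
type Spec struct{ Replicas int }

func (s Spec) MarshalJSON() ([]byte, error) {
	fmt.Println("Spec.MarshalJSON called")
	return json.Marshal(map[string]int{"replicas": s.Replicas})
}

// IstioOperator is a hypothetical stand-in for the Kube wrapper type.
type IstioOperator struct {
	Spec Spec `json:"spec"`
}

func main() {
	out, _ := json.Marshal(IstioOperator{Spec: Spec{Replicas: 3}})
	fmt.Println(string(out)) // {"spec":{"replicas":3}}
}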

liamawhite

comment created time in 22 days

issue comment istio/istio

DeepCopyInto of IstioOperatorSpec causes panic when patch value set

I think the JSON marshal does work; otherwise the operator wouldn't be able to use patches at all?

liamawhite

comment created time in 22 days

pull request comment istio/istio

Add testcase/documentation for add entry to unset list

/retest

liamawhite

comment created time in a month

push event liamawhite/istio

Liam White

commit sha fe2d5c9dd4153999079baa9c504f508db89af09c

fix broken test and ensure we test want Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in a month

create branch liamawhite/istio

branch : overlay-list-append

created branch time in a month

PR opened istio/istio

Add testcase/documentation for add entry to unset list

Signed-off-by: Liam White liam@tetrate.io

+20 -0

0 comment

1 changed file

pr created time in a month

PR opened istio/istio

Quantity Istio operator tests and fix do-not-merge/hold

Resolves https://github.com/istio/istio/issues/24839

I've added a failing test case that demonstrates the error and I think the only fix is to change all quantities to the custom intorstring type in the operator API. Does anyone have any objections?
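
For context on why IntOrString sidesteps the problem, here is a minimal sketch using k8s.io/apimachinery's intstr package rather than the operator's own custom type (the hpaLike struct is purely illustrative): the same field unmarshals from either a bare number or a quoted string.

package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

// hpaLike is an illustrative struct, not the operator API: the IntOrString
// field accepts either a bare number or a quoted string in the incoming JSON/YAML.
type hpaLike struct {
	TargetAverageValue intstr.IntOrString `json:"targetAverageValue"`
}

func main() {
	for _, doc := range []string{
		`{"targetAverageValue": 1000}`,
		`{"targetAverageValue": "1000m"}`,
	} {
		var h hpaLike
		if err := json.Unmarshal([]byte(doc), &h); err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("type=%v value=%s\n", h.TargetAverageValue.Type, h.TargetAverageValue.String())
	}
}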

Signed-off-by: Liam White liam@tetrate.io

[ ] Configuration Infrastructure [ ] Docs [x] Installation [ ] Networking [ ] Performance and Scalability [ ] Policies and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure

+81 -0

0 comment

2 changed files

pr created time in a month

create branch liamawhite/istio

branch : hpa_fix

created branch time in a month

Pull request review comment istio/istio

XDS-based replacement for /debug/syncz

 func (sg *InternalGen) startPush(typeURL string, data []proto.Message) {
 // We can also expose ACKS.
 func (sg *InternalGen) Generate(proxy *model.Proxy, push *model.PushContext, w *model.WatchedResource, updates model.XdsUpdates) model.Resources {
 	res := []*any.Any{}
-	switch w.TypeUrl {
+
+	// Look for ?query parameters.  We don't use url.Parse() because we have no scheme
+	typeURL := w.TypeUrl
+	var qparams url.Values
+	if qIndex := strings.Index(w.TypeUrl, "?"); qIndex >= 0 {
+		typeURL = w.TypeUrl[:qIndex]
+		var err error
+		qparams, err = url.ParseQuery(w.TypeUrl[qIndex+1:])
+		if err != nil {
+			adsLog.Infof("Invalid TypeUrl query params: %q", w.TypeUrl)
+		}
+	}
+
+	switch typeURL {
 	case TypeURLConnections:
 		sg.Server.adsClientsMutex.RLock()
 		// Create a temp map to avoid locking the add/remove
 		for _, v := range sg.Server.adsClients {
 			res = append(res, util.MessageToAny(v.xdsNode))
 		}
 		sg.Server.adsClientsMutex.RUnlock()
+	case TypeDebugSyncronization:
+		res = sg.debugSyncz()
+	case TypeDebugConfigDump:
+		proxyID := qparams.Get("proxyID")
+		if proxyID == "" {
+			adsLog.Info("Config Dump w/o proxyID query parameter")
+			break
+		}
+		res = sg.debugConfigDump(proxyID)
+	default:
+		adsLog.Infof("Unknown TypeUrl: %q", w.TypeUrl)
+	}
+	return res
+}
+
+func (sg *InternalGen) debugSyncz() []*any.Any {
+	res := []*any.Any{}
+
+	sg.Server.adsClientsMutex.RLock()
+	for _, con := range sg.Server.adsClients {
+		con.mu.RLock()
+		// Skip "nodes" without metdata (they are probably istioctl queries!)
+		if con.node != nil && con.node.Metadata != nil && con.node.Metadata.ProxyConfig != nil {
+			xdsConfigs := []*status.PerXdsConfig{}
+			for stype, watchedResource := range con.node.Active {
+				pxc := &status.PerXdsConfig{
+					Status: debugSyncStatus(watchedResource),
+				}
+				switch stype {
+				case pilot_xds_v3.ListenerShortType:
+					pxc.PerXdsConfig = &status.PerXdsConfig_ListenerConfig{}
+				case pilot_xds_v3.RouteShortType:
+					pxc.PerXdsConfig = &status.PerXdsConfig_RouteConfig{}
+				case pilot_xds_v3.EndpointShortType:
+					pxc.PerXdsConfig = &status.PerXdsConfig_EndpointConfig{}
+				case pilot_xds_v3.ClusterShortType:
+					pxc.PerXdsConfig = &status.PerXdsConfig_ClusterConfig{}
+				default:
+					adsLog.Warnf("Unknown watchedResource type: %q", stype)
+				}
+				xdsConfigs = append(xdsConfigs, pxc)
+			}
+			clientConfig := &status.ClientConfig{
+				Node: &core.Node{
+					Id: con.node.ID,
+				},
+				XdsConfig: xdsConfigs,
+			}
+			log.Infof("Trying to show status of %#v\n", clientConfig)
+			res = append(res, util.MessageToAny(clientConfig))
+		}
+		con.mu.RUnlock()
 	}
+	sg.Server.adsClientsMutex.RUnlock()
+
 	return res
 }
+
+func debugSyncStatus(wr *model.WatchedResource) status.ConfigStatus {
+	if wr.NonceSent == "" {
+		return status.ConfigStatus_NOT_SENT

Stale and NACK are different things; I'm not sure @therealmitchconnors was asking for that though. There are a couple of possibilities:

  • Sent but not yet ack'd > threshold -> STALE
  • Nacked -> ERROR
  • Ack'd -> SYNCED

I think NOT_SENT should also change to something like OK or just blank. Users get confused and think NOT_SENT indicates an error, but it's actually fine.
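
A minimal sketch of the mapping being proposed, assuming hypothetical NonceSent/NonceAcked/NonceNacked fields and a staleness threshold; this is not the actual istio/istio implementation:

package main

import (
	"fmt"
	"time"
)

type syncStatus string

const (
	statusSynced  syncStatus = "SYNCED"
	statusStale   syncStatus = "STALE"
	statusError   syncStatus = "ERROR"
	statusNotSent syncStatus = "NOT_SENT" // or blank/OK, per the discussion above
)

// watched is a hypothetical stand-in for model.WatchedResource.
type watched struct {
	NonceSent   string
	NonceAcked  string
	NonceNacked string
	SentAt      time.Time
}

func classify(w watched, staleAfter time.Duration) syncStatus {
	switch {
	case w.NonceSent == "":
		return statusNotSent
	case w.NonceNacked != "" && w.NonceNacked == w.NonceSent:
		return statusError // the proxy rejected the last push
	case w.NonceAcked == w.NonceSent:
		return statusSynced
	case time.Since(w.SentAt) > staleAfter:
		return statusStale // sent but not yet acked past the threshold
	default:
		return statusSynced // in flight but still within the threshold
	}
}

func main() {
	fmt.Println(classify(watched{NonceSent: "a", NonceAcked: "a"}, time.Minute))
	fmt.Println(classify(watched{NonceSent: "b", SentAt: time.Now().Add(-2 * time.Minute)}, time.Minute))
}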

esnible

comment created time in a month

Pull request review comment istio/istio

Config file and env var for istioctl --istioNamespace, --xds-address, and --cert-dir

 func defaultLogOptions() *log.Options {
 	return o
 }
+// ConfigAndEnvProcessing uses spf13/viper for overriding CLI parameters
+func ConfigAndEnvProcessing() error {
+	if IstioConfig != defaultIstioctlConfig {
+		// Warn if user incorrectly customized $ISTIOCONFIG
+		if _, err := os.Stat(IstioConfig); os.IsNotExist(err) {
+			fmt.Fprintf(os.Stderr, "Warning: Configuration file %q does not exist\n", IstioConfig)

Do we want to continue or error here? I can imagine some CI/CD "silently" failing (if no one is reading the CI/CD output).
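
A sketch of the stricter alternative being floated here, with a hypothetical loadConfig helper in place of the real istioctl code, so a missing user-specified config fails fast rather than only warning:

package main

import (
	"fmt"
	"os"
)

// loadConfig is a hypothetical stand-in for the istioctl config processing
// discussed above: a missing user-specified config file becomes a hard error
// instead of a warning on stderr.
func loadConfig(path, defaultPath string) error {
	if path != defaultPath {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			return fmt.Errorf("configuration file %q does not exist", path)
		}
	}
	return nil
}

func main() {
	if err := loadConfig("/nonexistent/config.yaml", "$HOME/.istioctl/config.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // CI/CD gets a non-zero exit instead of a buried warning
	}
}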

esnible

comment created time in a month

pull request comment istio/istio

[release-1.6] Add version label to `pilot_xds` metric

@googlebot I consent

istio-testing

comment created time in a month

Pull request review comment istio/istio

Config file and env var for istioctl --istioNamespace, --xds-address, and --cert-dir

 func defaultLogOptions() *log.Options {
 	return o
 }
+// ConfigAndEnvProcessing uses spf13/viper for overriding CLI parameters
+func ConfigAndEnvProcessing() error {
+	// Allow users to override some variables through $HOME/.istioctl/config.yaml
+	// and environment variables.
+	viper.SetEnvPrefix("ISTIOCTL")
+	viper.AutomaticEnv()
+	viper.SetConfigName("config") // name of config file (without extension)
+	viper.SetConfigType("yaml")
+	viper.AddConfigPath("$HOME/.istioctl")

I think this may break on Windows? Maybe we should use https://github.com/mitchellh/go-homedir.
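
A minimal sketch of the suggestion, assuming the spf13/viper and mitchellh/go-homedir packages; the function name and paths are illustrative, not the actual istioctl code:

package main

import (
	"fmt"
	"path/filepath"

	homedir "github.com/mitchellh/go-homedir"
	"github.com/spf13/viper"
)

func configAndEnvProcessing() error {
	home, err := homedir.Dir() // resolves the home directory portably, including on Windows
	if err != nil {
		return err
	}

	viper.SetEnvPrefix("ISTIOCTL")
	viper.AutomaticEnv()
	viper.SetConfigName("config") // config.yaml
	viper.SetConfigType("yaml")
	viper.AddConfigPath(filepath.Join(home, ".istioctl"))
	return viper.ReadInConfig()
}

func main() {
	if err := configAndEnvProcessing(); err != nil {
		fmt.Println("no config loaded:", err)
	}
}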

esnible

comment created time in a month

pull request comment istio/istio

Add version label to `pilot_xds` metric

/cherry-pick release-1.6

Not sure if there are specific criteria in place to qualify as a cherry-pick but we would like to deliver this to our customers currently running 1.6. Please deny the PR if this doesn't meet the criteria.

liamawhite

comment created time in a month

Pull request review comment istio/istio

Add version label to `pilot_xds` metric

 var (
 
 	// TODO: Update all the resource stats in separate routine
 	// virtual services, destination rules, gateways, etc.
-	xdsClients = monitoring.NewGauge(

Saw this after, I updated the PR. Let me know if my changes align.

liamawhite

comment created time in a month

push event liamawhite/istio

Liam White

commit sha c7921109797b5059bb3d914d2547e71872ec0b7f

back to gauge Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in a month

Pull request review comment istio/istio

Add version label to `pilot_xds` metric

 var (
 
 	// TODO: Update all the resource stats in separate routine
 	// virtual services, destination rules, gateways, etc.
-	xdsClients = monitoring.NewGauge(

Wouldn't that still require recomputing each time? Or are you suggesting having a gauge but with a value stored alongside it that we can increment and decrement before recording?
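
As a sketch of the "gauge with a value stored alongside it" option, kept independent of Istio's monitoring package (the record callback is a hypothetical stand-in for the real gauge):

package main

import (
	"fmt"
	"sync"
)

// versionGauge keeps a per-version connection count so the gauge can be
// updated on connect/disconnect without recomputing the split each time.
type versionGauge struct {
	mu     sync.Mutex
	counts map[string]int
	record func(version string, value float64) // hypothetical gauge recorder
}

func newVersionGauge(record func(string, float64)) *versionGauge {
	return &versionGauge{counts: map[string]int{}, record: record}
}

func (g *versionGauge) connect(version string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.counts[version]++
	g.record(version, float64(g.counts[version]))
}

func (g *versionGauge) disconnect(version string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.counts[version]--
	g.record(version, float64(g.counts[version]))
}

func main() {
	g := newVersionGauge(func(v string, val float64) {
		fmt.Printf("pilot_xds{version=%q} = %v\n", v, val)
	})
	g.connect("1.6.5")
	g.connect("1.6.5")
	g.connect("1.7.0")
	g.disconnect("1.6.5")
}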

liamawhite

comment created time in a month

Pull request review comment istio/istio

Add version label to `pilot_xds` metric

 var (
 
 	// TODO: Update all the resource stats in separate routine
 	// virtual services, destination rules, gateways, etc.
-	xdsClients = monitoring.NewGauge(

So I don't have to recompute (or keep track of in the discovery server) the version split every time I have to update a gauge. Which of sum, recompute or track would you prefer?

liamawhite

comment created time in a month

PR opened istio/istio

Reviewers
Add version label to `pilot_xds` metric

Signed-off-by: Liam White liam@tetrate.io

Adds a version label to pilot_xds to give more information on data plane versions without needing to scrape the entire data plane.

See https://istio.slack.com/archives/C38CF1PEC/p1594141641327900 for background.

[ ] Configuration Infrastructure [ ] Docs [ ] Installation [x] Networking [ ] Performance and Scalability [x] Policies and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure

+6 -5

0 comment

2 changed files

pr created time in a month

push event liamawhite/istio

Liam White

commit sha 793b584f6fe719705c1cebd04057a2583d36aca7

Add version label to metric Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in a month

create branch liamawhite/istio

branch : xds_version

created branch time in a month

push event liamawhite/licenser

Yaroslav Skopets

commit sha 3557c6b73901fbeefaedbac9108e77794e18be7c

Add support for Rust

view details

Liam White

commit sha 0c305ecc2e7cb234fefbfaa8fd4a84f8093c4587

Merge pull request #10 from yskopets/feature/rust Add support for Rust

view details

push time in a month

PR merged liamawhite/licenser

Add support for Rust
+7 -0

0 comment

3 changed files

yskopets

pr closed time in a month

issue comment istio/istio

HPA specs with quantities causes install errors.

Could you try it with quotes around ""?

@ostromart I think the issue may be that Kube's marshalling relies on strings being wrapped in quotation marks. This isn't always true in YAML. I have a fix for a similar issue we had internally, do you want me to add something similar in the API repo?

func (intstrpb *IntOrString) UnmarshalJSONPB(_ *jsonpb.Unmarshaler, value []byte) error {
	// If it's a string that isn't wrapped in quotes, add them to appease the Kubernetes unmarshal
	if _, err := strconv.Atoi(string(value)); err != nil && len(value) > 0 && value[0] != '"' {
		value = append([]byte{'"'}, value...)
		value = append(value, '"')
	}
	return intstrpb.UnmarshalJSON(value)
}
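
A standalone sketch of the same quoting trick outside the jsonpb wiring, just to show what the wrapping does to the raw bytes; the quoteIfBareString helper is illustrative, not the istio/api code:

package main

import (
	"fmt"
	"strconv"
)

// quoteIfBareString wraps a non-numeric, unquoted JSON value in quotes so a
// downstream unmarshaller that expects a JSON string (or number) accepts it.
func quoteIfBareString(value []byte) []byte {
	if _, err := strconv.Atoi(string(value)); err != nil && len(value) > 0 && value[0] != '"' {
		value = append([]byte{'"'}, value...)
		value = append(value, '"')
	}
	return value
}

func main() {
	fmt.Println(string(quoteIfBareString([]byte(`1000`))))  // 1000 (numbers pass through)
	fmt.Println(string(quoteIfBareString([]byte(`80%`))))   // "80%" (bare strings get quoted)
	fmt.Println(string(quoteIfBareString([]byte(`"80%"`)))) // "80%" (already quoted, untouched)
}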
liamawhite

comment created time in a month

issue comment istio/istio

HPA specs with quantities causes install errors.

That's interesting, did you mean it created the CR in Kube but the operator completely ignored it? Or did it just not let you create the CR?

liamawhite

comment created time in 2 months

issue opened istio/istio

HPA specs with quantities causes install errors.

Bug description

Setting quantities in the IstioOperator CR causes unmarshalling errors.

2020-06-19T14:57:34.491412Z	error	k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125: Failed to list *v1alpha1.IstioOperator: v1alpha1.IstioOperatorList.Items: []v1alpha1.IstioOperator: v1alpha1.IstioOperator.Status: Spec: unmarshalerDecoder: json: cannot unmarshal number into Go value of type map[string]json.RawMessage, error found in #10 byte of ...|d":true}}},"status":|..., bigger context ...|emo","values":{"pilot":{"autoscaleEnabled":true}}},"status":{"componentStatus":{"AddonComponents":{"|...

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure [ ] Docs [x] Installation [ ] Networking [ ] Performance and Scalability [ ] Policies and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure

Expected behavior

To be able to install when setting quantities in the HPA spec

Steps to reproduce the bug

Create the below CR. It appears this only affects quantities, because if I change targetAverageValue: 1000 to targetAverageUtilization: 1000 then it works. The difference is that the latter is of type integer, not Quantity.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  values:
    pilot:
      autoscaleEnabled: true
  profile: demo
  components:
    pilot:
      k8s:
        hpaSpec:
          scaleTargetRef:
            apiVersion: extensions/v1beta1
            kind: Deployment
            name: istiod
          minReplicas: 1
          maxReplicas: 5
          metrics:
          - type: Resource
            resource:
              name: cpu
              targetAverageValue: 1000

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)

client version: 1.6.3
control plane version: 1.6.1
data plane version: 1.6.1 (12 proxies)

created time in 2 months

issue opened istio/istio

Install Overlays are Broken

Bug description

DeepCopyInto causes a panic when using any type in the patch value. This makes it impossible to use any patch in the operator. I have created a unit test reproducing it.

--- FAIL: TestIstioOperatorSpec_DeepCopyInto (0.00s)
panic: reflect: call of reflect.Value.Elem on int Value [recovered]
	panic: reflect: call of reflect.Value.Elem on int Value

goroutine 7 [running]:
testing.tRunner.func1.1(0x15ec7c0, 0xc00000fd00)
	/usr/local/Cellar/go/1.14.2_1/libexec/src/testing/testing.go:940 +0x2f5
testing.tRunner.func1(0xc0001986c0)
	/usr/local/Cellar/go/1.14.2_1/libexec/src/testing/testing.go:943 +0x3f9
panic(0x15ec7c0, 0xc00000fd00)
	/usr/local/Cellar/go/1.14.2_1/libexec/src/runtime/panic.go:969 +0x166
reflect.Value.Elem(0x15c61a0, 0x1757378, 0x82, 0x15c61a0, 0x1757378, 0x82)
	/usr/local/Cellar/go/1.14.2_1/libexec/src/reflect/value.go:820 +0x1a4
github.com/gogo/protobuf/proto.mergeAny(0x15e5da0, 0xc000035790, 0x194, 0x15e5da0, 0xc000035390, 0x194, 0x100, 0xc0001d2500)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:166 +0x471
github.com/gogo/protobuf/proto.mergeStruct(0x1650260, 0xc000035780, 0x199, 0x1650260, 0xc000035380, 0x199)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:108 +0x280
github.com/gogo/protobuf/proto.mergeAny(0x1650260, 0xc000035780, 0x199, 0x1650260, 0xc000035380, 0x199, 0x1625601, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:236 +0x1724
github.com/gogo/protobuf/proto.mergeAny(0x1654de0, 0xc000010328, 0x196, 0x1654de0, 0xc0000102f0, 0x196, 0x100, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:199 +0x70e
github.com/gogo/protobuf/proto.mergeAny(0x15b7d60, 0xc0001c6a40, 0x197, 0x15b7d60, 0xc000189290, 0x197, 0x100, 0xc0001d2300)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:231 +0x112c
github.com/gogo/protobuf/proto.mergeStruct(0x165e160, 0xc0001c6a10, 0x199, 0x165e160, 0xc000189260, 0x199)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:108 +0x280
github.com/gogo/protobuf/proto.mergeAny(0x165e160, 0xc0001c6a10, 0x199, 0x165e160, 0xc000189260, 0x199, 0x1625601, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:236 +0x1724
github.com/gogo/protobuf/proto.mergeAny(0x1660320, 0xc000010308, 0x196, 0x1660320, 0xc0000102e8, 0x196, 0x100, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:199 +0x70e
github.com/gogo/protobuf/proto.mergeAny(0x15b7d20, 0xc00016e348, 0x197, 0x15b7d20, 0xc00016e268, 0x197, 0x100, 0xc0001d1f00)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:231 +0x112c
github.com/gogo/protobuf/proto.mergeStruct(0x1685820, 0xc00016e2a0, 0x199, 0x1685820, 0xc00016e1c0, 0x199)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:108 +0x280
github.com/gogo/protobuf/proto.mergeAny(0x1685820, 0xc00016e2a0, 0x199, 0x1685820, 0xc00016e1c0, 0x199, 0x1, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:236 +0x1724
github.com/gogo/protobuf/proto.mergeAny(0x167fee0, 0xc0001c69e8, 0x196, 0x167fee0, 0xc000189238, 0x196, 0x100, 0xc0001d3800)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:199 +0x70e
github.com/gogo/protobuf/proto.mergeStruct(0x1668240, 0xc0001c69a0, 0x199, 0x1668240, 0xc0001891f0, 0x199)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:108 +0x280
github.com/gogo/protobuf/proto.mergeAny(0x1668240, 0xc0001c69a0, 0x199, 0x1668240, 0xc0001891f0, 0x199, 0x15c7101, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:236 +0x1724
github.com/gogo/protobuf/proto.mergeAny(0x165c660, 0xc00014c588, 0x196, 0x165c660, 0xc00014c408, 0x196, 0x100, 0xc0001d3200)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:199 +0x70e
github.com/gogo/protobuf/proto.mergeStruct(0x1670400, 0xc00014c580, 0x199, 0x1670400, 0xc00014c400, 0x199)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:108 +0x280
github.com/gogo/protobuf/proto.mergeAny(0x1670400, 0xc00014c580, 0x199, 0x1670400, 0xc00014c400, 0x199, 0x5a09901, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:236 +0x1724
github.com/gogo/protobuf/proto.mergeAny(0x166d260, 0xc0001ac138, 0x196, 0x166d260, 0xc0001ac078, 0x196, 0x100, 0xc0001aea00)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:199 +0x70e
github.com/gogo/protobuf/proto.mergeStruct(0x167f1c0, 0xc0001ac0c0, 0x199, 0x167f1c0, 0xc0001ac000, 0x199)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:108 +0x280
github.com/gogo/protobuf/proto.Merge(0x1771540, 0xc0001ac0c0, 0x1771540, 0xc0001ac000)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:98 +0x3c1
github.com/gogo/protobuf/proto.Clone(0x1771540, 0xc0001ac000, 0x0, 0x0)
	/Users/liam/dev/pkg/mod/github.com/gogo/protobuf@v1.3.1/proto/clone.go:52 +0x18f
istio.io/api/operator/v1alpha1.(*IstioOperatorSpec).DeepCopyInto(...)
	/Users/liam/dev/src/istio.io/api/operator/v1alpha1/deepcopy.go:33
istio.io/api/operator/v1alpha1.TestIstioOperatorSpec_DeepCopyInto(0xc0001986c0)
	/Users/liam/dev/src/istio.io/api/operator/v1alpha1/deepcopy_test.go:40 +0x218
testing.tRunner(0xc0001986c0, 0x16c7718)
	/usr/local/Cellar/go/1.14.2_1/libexec/src/testing/testing.go:991 +0xdc
created by testing.(*T).Run
	/usr/local/Cellar/go/1.14.2_1/libexec/src/testing/testing.go:1042 +0x357
FAIL	istio.io/api/operator/v1alpha1	0.205s
FAIL

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure [ ] Docs [x] Installation [ ] Networking [ ] Performance and Scalability [ ] Policies and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure

Affected features (please put an X in all that apply)

[x] Multi Cluster [x] Virtual Machine [x] Multi Control Plane

created time in 2 months

create branch liamawhite/api

branch : deep-copy-into-panic

created branch time in 2 months

delete branch liamawhite/licenser

delete branch : liamawhite-patch-1

delete time in 2 months

PR closed liamawhite/licenser

Update issue templates
+20 -0

0 comment

1 changed file

liamawhite

pr closed time in 2 months

push event liamawhite/licenser

Liam White

commit sha de2af28f496e7261ed812ba060859dc08d2bcd81

Update issue templates

view details

push time in 2 months

PR opened liamawhite/licenser

Update issue templates
+20 -0

0 comment

1 changed file

pr created time in 2 months

create branch liamawhite/licenser

branch : liamawhite-patch-1

created branch time in 2 months

Pull request review comment open-telemetry/opentelemetry-collector

Handle overlapping metrics from different jobs in prometheus exporter

 func (pe *prometheusExporter) Start(_ context.Context, _ component.Host) error {
 }
 
 func (pe *prometheusExporter) ConsumeMetricsData(ctx context.Context, md consumerdata.MetricsData) error {
+	merged := make(map[string]*metricspb.Metric)
 	for _, metric := range md.Metrics {
+		merge(merged, metric)
+	}
+	for _, metric := range merged {
 		_ = pe.exporter.ExportMetric(ctx, md.Node, md.Resource, metric)
 	}
 	return nil
 }
+// The underlying exporter overwrites timeseries when there are conflicting metric signatures.
+// Therefore, we need to merge timeseries that share a metric signature into a single metric before sending.
+func merge(m map[string]*metricspb.Metric, metric *metricspb.Metric) {
+	key := metricSignature(metric)
+	current, ok := m[key]
+	if !ok {
+		m[key] = metric
+		return
+	}
+	current.Timeseries = append(current.Timeseries, metric.Timeseries...)
+}
+
+// Unique identifier of a given promtheus metric
+// Assumes label keys are always in the same order
+func metricSignature(metric *metricspb.Metric) string {
+	var buf bytes.Buffer
+	buf.WriteString(metric.GetMetricDescriptor().GetName())
+	labelKeys := metric.GetMetricDescriptor().GetLabelKeys()
+	for _, labelKey := range labelKeys {
+		buf.WriteString("-" + labelKey.Key)

@tigrannajaryan I think we're good to merge then 🙂

liamawhite

comment created time in 2 months

Pull request review comment open-telemetry/opentelemetry-collector

Handle overlapping metrics from different jobs in prometheus exporter

 func (pe *prometheusExporter) Start(_ context.Context, _ component.Host) error {
 }
 
 func (pe *prometheusExporter) ConsumeMetricsData(ctx context.Context, md consumerdata.MetricsData) error {
+	merged := make(map[string]*metricspb.Metric)
 	for _, metric := range md.Metrics {
+		merge(merged, metric)
+	}
+	for _, metric := range merged {
 		_ = pe.exporter.ExportMetric(ctx, md.Node, md.Resource, metric)
 	}
 	return nil
 }
+// The underlying exporter overwrites timeseries when there are conflicting metric signatures.
+// Therefore, we need to merge timeseries that share a metric signature into a single metric before sending.
+func merge(m map[string]*metricspb.Metric, metric *metricspb.Metric) {
+	key := metricSignature(metric)
+	current, ok := m[key]
+	if !ok {
+		m[key] = metric
+		return
+	}
+	current.Timeseries = append(current.Timeseries, metric.Timeseries...)
+}
+
+// Unique identifier of a given promtheus metric
+// Assumes label keys are always in the same order
+func metricSignature(metric *metricspb.Metric) string {
+	var buf bytes.Buffer
+	buf.WriteString(metric.GetMetricDescriptor().GetName())
+	labelKeys := metric.GetMetricDescriptor().GetLabelKeys()
+	for _, labelKey := range labelKeys {
+		buf.WriteString("-" + labelKey.Key)

It is; however, this signature function is a copy-paste from the exporter. I guess it doesn't hurt to change it, though; would you like me to do so?
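
A self-contained sketch of the merge-by-signature behaviour under discussion, using simplified stand-in types rather than the OpenCensus metric protos:

package main

import (
	"fmt"
	"strings"
)

// metric is a simplified stand-in for *metricspb.Metric.
type metric struct {
	name       string
	labelKeys  []string
	timeseries []string // stand-in for the real timeseries payloads
}

// signature mirrors the reviewed metricSignature: name plus ordered label keys.
func signature(m *metric) string {
	return m.name + "-" + strings.Join(m.labelKeys, "-")
}

// merge folds metrics that share a signature into one metric with all timeseries,
// so the exporter no longer overwrites one job's series with another's.
func merge(dst map[string]*metric, m *metric) {
	key := signature(m)
	if cur, ok := dst[key]; ok {
		cur.timeseries = append(cur.timeseries, m.timeseries...)
		return
	}
	dst[key] = m
}

func main() {
	merged := map[string]*metric{}
	merge(merged, &metric{name: "grpc_server_handled_total", labelKeys: []string{"method"}, timeseries: []string{"job=a"}})
	merge(merged, &metric{name: "grpc_server_handled_total", labelKeys: []string{"method"}, timeseries: []string{"job=b"}})
	for k, m := range merged {
		fmt.Println(k, m.timeseries) // one metric, two timeseries
	}
}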

liamawhite

comment created time in 2 months

push event liamawhite/opentelemetry-collector

Joe Elliott

commit sha 685777fc19853650bb0fd4922b4596f7c5f9b9a9

Add Grafana as an Adopter (#1095) * Add Grafana as an adopter Signed-off-by: Joe Elliott <number101010@gmail.com> * Grafana Labs Signed-off-by: Joe Elliott <number101010@gmail.com>

view details

Steve Flanders

commit sha 3c93591492db7e69189c78f86a3e153af3083c94

Switch from localhost to 0.0.0.0 by default for all receivers (#1006) Switch from localhost to 0.0.0.0 by default Link to tracking Issue: Addresses #592

view details

Dmitrii Anoshin

commit sha 8b9fe07e6495cd04d6c09033def87e539af10c18

Do not duplicate span kind in span.kind attribute in jaeger receivers (#1100) Since OTLP span kinds has 1-1 mapping with OpenTracing specification we don't need to keep additional "span.kind" value as was done in OC internal representation. The commit fixes the issue: https://github.com/open-telemetry/opentelemetry-collector/issues/878

view details

Nail Islamov

commit sha ccf10f0f334b6b5bc13b9b09fac43f7d8ee9a3a3

Add missing logging for metrics at 'debug' level (#1108)

view details

Dmitrii Anoshin

commit sha a71185a9cb28c8a332bab8f46e1a0492dce2acaf

Remove year from copyright header (#1106)

view details

Liam White

commit sha e37f37086515da2afb189804871a7f8b652f20f1

Handle overlapping metrics from different jobs in prom exporter Signed-off-by: Liam White <liam@tetrate.io>

view details

Liam White

commit sha 22e8963448c0b2ae6dbb13bef4c1559f67f5bdca

linting fixes Signed-off-by: Liam White <liam@tetrate.io>

view details

Liam White

commit sha 09cb2a191fc3d506cce701a7a4ffbabccf7c5eed

actually fix linting Signed-off-by: Liam White <liam@tetrate.io>

view details

Liam White

commit sha 8cde07392271c60933daa8704603d6005d7fdc70

make a little more efficient Signed-off-by: Liam White <liam@tetrate.io>

view details

Liam White

commit sha 56fd17304d64abde80e03c0a1508c622fabcfe13

Add comments explaining merge per review Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in 2 months

pull request comment istio/istio.io

Add Tetrateio to partner page

/ok-to-test

tialouden

comment created time in 2 months

pull request comment open-telemetry/opentelemetry-collector

Handle overlapping metrics from different jobs in prom exporter

Test failure looks unrelated?

liamawhite

comment created time in 2 months

Pull request review comment open-telemetry/opentelemetry-collector

Handle overlapping metrics from different jobs in prom exporter

 func (pe *prometheusExporter) Start(_ context.Context, _ component.Host) error {
 }
 
 func (pe *prometheusExporter) ConsumeMetricsData(ctx context.Context, md consumerdata.MetricsData) error {
+	merged := make(map[string]*metricspb.Metric)
 	for _, metric := range md.Metrics {
+		merge(merged, metric)
+	}
+	for _, metric := range merged {
 		_ = pe.exporter.ExportMetric(ctx, md.Node, md.Resource, metric)
 	}
 	return nil
 }
+func merge(m map[string]*metricspb.Metric, metric *metricspb.Metric) {

done 🙂

liamawhite

comment created time in 2 months

push event liamawhite/opentelemetry-collector

Liam White

commit sha ee27512d07caf5e2f818c06bc5367e27662204ed

Add comments explaining merge per review Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in 2 months

push event liamawhite/opentelemetry-collector

Liam White

commit sha 158feafc8b41c3c8ad793af41f5e171ed028638d

make a little more efficient Signed-off-by: Liam White <liam@tetrate.io>

view details

push time in 2 months

push event liamawhite/opentelemetry-collector

Liam White

commit sha 34e58ba3e57c68c76131e64850a98d2f2e74bc02

actually fix linting Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

push event liamawhite/opentelemetry-collector

Liam White

commit sha 66b034a14f1ff9702849585c34a4832cc83bffa5

linting fixes Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

PR closed orijtech/prometheus-go-metrics-exporter

handle multiple metrics with the same signature

The OpenTelemetry Collector splits the same metric across multiple calls to ExportMetric when it scrapes that metric from multiple targets. In our setup this manifests as all gRPC metrics clobbering one another.

See https://github.com/open-telemetry/opentelemetry-collector/issues/1076 for background.

Signed-off-by: Liam White liam@tetrate.io
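To make the failure mode concrete: the same metric scraped from two targets reaches the exporter as two Metric protos with an identical descriptor, and an exporter that keys only on the name keeps just one of them. The metric name, label key, and scraped helper below are invented for this example.

```go
package main

import (
	"fmt"

	metricspb "github.com/census-instrumentation/opencensus-proto/gen-go/metrics/v1"
)

// scraped builds one target's copy of the metric. The target identity is
// deliberately not part of the labels, which is why the copies collide.
func scraped(target string) *metricspb.Metric {
	_ = target
	return &metricspb.Metric{
		MetricDescriptor: &metricspb.MetricDescriptor{
			Name:      "grpc_server_started_total",
			LabelKeys: []*metricspb.LabelKey{{Key: "grpc_method"}},
		},
		Timeseries: []*metricspb.TimeSeries{{
			LabelValues: []*metricspb.LabelValue{{Value: "SayHello", HasValue: true}},
		}},
	}
}

func main() {
	byName := map[string]*metricspb.Metric{}
	for _, m := range []*metricspb.Metric{scraped("source1"), scraped("source2")} {
		byName[m.MetricDescriptor.Name] = m // last writer wins: source1's data is lost
	}
	fmt.Println(len(byName)) // 1
}
```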

+53 -4

2 comments

3 changed files

liamawhite

pr closed time in 2 months

pull request comment orijtech/prometheus-go-metrics-exporter

handle multiple metrics with the same signature

Closing in favour of https://github.com/open-telemetry/opentelemetry-collector/pull/1096

liamawhite

comment created time in 2 months

push event liamawhite/opentelemetry-collector

Liam White

commit sha 735a234640d40d8bedd4431907de7f57f8cb326c

Handle overlapping metrics from different jobs in prom exporter Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

push event liamawhite/opentelemetry-collector

Liam White

commit sha 4f9bf04de294c56e53a1401c9b9cf221409603f7

more debug Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

pull request comment orijtech/prometheus-go-metrics-exporter

handle multiple metrics with the same signature

This doesn't quite work as expected. It seems to work the first time and then throws errors.

2020-06-08T22:56:41.632Z	INFO	loggingexporter/logging_exporter.go:169	MetricsExporter	{"#metrics": 4}
2020-06-08T22:56:55.594Z	INFO	loggingexporter/logging_exporter.go:169	MetricsExporter	{"#metrics": 4}
{"level":"error","ts":1591657021.8221378,"caller":"prometheusexporter/factory.go:69","msg":"error gathering metrics:","component_kind":"exporter","component_type":"prometheus","component_name":"prometheus","stacktrace":"go.opentelemetry.io/collector/exporter/prometheusexporter.logErrorWrapper.Println\n\t/Users/liam/dev/src/github.com/open-telemetry/opentelemetry-collector/exporter/prometheusexporter/factory.go:69\ngithub.com/prometheus/client_golang/prometheus/promhttp.HandlerFor.func1\n\t/Users/liam/dev/pkg/mod/github.com/prometheus/client_golang@v1.5.1/prometheus/promhttp/http.go:129\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:2012\ngithub.com/orijtech/prometheus-go-metrics-exporter.(*Exporter).ServeHTTP\n\t/Users/liam/dev/src/github.com/orijtech/prometheus-go-metrics-exporter/prometheus.go:157\nnet/http.(*ServeMux).ServeHTTP\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:2387\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:2807\nnet/http.(*conn).serve\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:1895"}
{"level":"error","ts":1591657021.8223276,"caller":"prometheusexporter/factory.go:69","msg":"4 error(s) occurred:\n* collected metric \"promexample_opdemo_latency\" { label:<name:\"client\" value:\"cli\" > label:<name:\"label1\" value:\"value1\" > label:<name:\"method\" value:\"repl\" > label:<name:\"source\" value:\"source2\" > histogram:<sample_count:41 sample_sum:26497.543999999998 bucket:<cumulative_count:5 upper_bound:10 > bucket:<cumulative_count:17 upper_bound:50 > bucket:<cumulative_count:31 upper_bound:100 > bucket:<cumulative_count:31 upper_bound:200 > bucket:<cumulative_count:33 upper_bound:400 > bucket:<cumulative_count:35 upper_bound:800 > bucket:<cumulative_count:36 upper_bound:1000 > bucket:<cumulative_count:38 upper_bound:1400 > bucket:<cumulative_count:38 upper_bound:2000 > bucket:<cumulative_count:40 upper_bound:5000 > bucket:<cumulative_count:40 upper_bound:10000 > bucket:<cumulative_count:41 upper_bound:15000 > > } was collected before with the same name and label values\n* collected metric \"promexample_opdemo_process_counts\" { label:<name:\"client\" value:\"cli\" > label:<name:\"label1\" value:\"value1\" > label:<name:\"method\" value:\"repl\" > label:<name:\"source\" value:\"source2\" > counter:<value:41 > } was collected before with the same name and label values\n* collected metric \"promexample_opdemo_line_lengths\" { label:<name:\"client\" value:\"cli\" > label:<name:\"label1\" value:\"value1\" > label:<name:\"method\" value:\"repl\" > label:<name:\"source\" value:\"source2\" > histogram:<sample_count:115 sample_sum:54183 bucket:<cumulative_count:0 upper_bound:10 > bucket:<cumulative_count:2 upper_bound:20 > bucket:<cumulative_count:6 upper_bound:50 > bucket:<cumulative_count:17 upper_bound:100 > bucket:<cumulative_count:22 upper_bound:150 > bucket:<cumulative_count:27 upper_bound:200 > bucket:<cumulative_count:62 upper_bound:500 > bucket:<cumulative_count:93 upper_bound:800 > > } was collected before with the same name and label values\n* collected metric \"promexample_opdemo_line_counts\" { label:<name:\"client\" value:\"cli\" > label:<name:\"label1\" value:\"value1\" > label:<name:\"method\" value:\"repl\" > label:<name:\"source\" value:\"source2\" > counter:<value:115 > } was collected before with the same name and label values","component_kind":"exporter","component_type":"prometheus","component_name":"prometheus","stacktrace":"go.opentelemetry.io/collector/exporter/prometheusexporter.logErrorWrapper.Println\n\t/Users/liam/dev/src/github.com/open-telemetry/opentelemetry-collector/exporter/prometheusexporter/factory.go:69\ngithub.com/prometheus/client_golang/prometheus/promhttp.HandlerFor.func1\n\t/Users/liam/dev/pkg/mod/github.com/prometheus/client_golang@v1.5.1/prometheus/promhttp/http.go:129\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:2012\ngithub.com/orijtech/prometheus-go-metrics-exporter.(*Exporter).ServeHTTP\n\t/Users/liam/dev/src/github.com/orijtech/prometheus-go-metrics-exporter/prometheus.go:157\nnet/http.(*ServeMux).ServeHTTP\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:2387\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:2807\nnet/http.(*conn).serve\n\t/usr/local/Cellar/go/1.14.2_1/libexec/src/net/http/server.go:1895"}```

Can confirm that the first value set never changes as well.

liamawhite

comment created time in 2 months
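For reference, the "was collected before with the same name and label values" errors quoted above are what client_golang's registry reports when a collector emits two samples with the same identity. A minimal, self-contained repro under that assumption (the metric and label names below are invented):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// dupCollector emits the same series twice, which is roughly what duplicate
// per-target metrics look like to the Prometheus registry.
type dupCollector struct {
	desc *prometheus.Desc
}

func (c dupCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

func (c dupCollector) Collect(ch chan<- prometheus.Metric) {
	// Same name and the same label values, emitted twice.
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.CounterValue, 41, "cli")
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.CounterValue, 115, "cli")
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(dupCollector{
		desc: prometheus.NewDesc("promexample_opdemo_process_counts", "demo counter", []string{"client"}, nil),
	})
	_, err := reg.Gather()
	fmt.Println(err) // ...was collected before with the same name and label values
}
```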

push event liamawhite/prometheus-go-metrics-exporter

Liam White

commit sha 396b19a1a69b9a243912a42b2dc3a0b802837c91

continue on error by default Signed-off-by: Liam White <liam@tetrate.io>


Emmanuel T Odeke

commit sha a37c735ee1b837691ebc084b5836a06b0d6f0199

Merge pull request #6 from liamawhite/better-default Use promhttp.ContinueOnError for best effort metrics serving
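For context, this is roughly how promhttp.ContinueOnError gets wired into a metrics handler; the registry and listen address below are placeholders, not the exporter's actual setup.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	reg := prometheus.NewRegistry()
	// ContinueOnError serves whatever metrics were gathered cleanly instead of
	// failing the whole scrape when some collectors return errors.
	handler := promhttp.HandlerFor(reg, promhttp.HandlerOpts{
		ErrorHandling: promhttp.ContinueOnError,
	})
	http.Handle("/metrics", handler)
	log.Fatal(http.ListenAndServe(":8888", nil))
}
```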


James Bebbington

commit sha 52bb9c41695f801b29f203a175c2933e7a710457

Correctly set ValueType to Counter or Gauge


Emmanuel T Odeke

commit sha 7197eacefa530854185b0ca3e0f877ea15924f63

Merge pull request #8 from james-bebbington/master Correctly set ValueType to Counter or Gauge Uses the already derivedType parameter that was mistakenly not being used.


Liam White

commit sha 0ec14a1cd62b47449cb44e238d6f76c8d2cba871

handle multiple metrics with the same signature Signed-off-by: Liam White <liam@tetrate.io>


Liam White

commit sha 1738b427bc9463efee9eca226b007eae351397a2

add ability to configure logger Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

push event liamawhite/prometheus-go-metrics-exporter

Liam White

commit sha 0c77bed48579738f4c2123a4e6273a3c4200dbee

add ability to configure logger Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

pull request comment orijtech/prometheus-go-metrics-exporter

handle multiple metrics with the same signature

This doesn't fully fix the issue. When running the scenario mentioned in the collector issue I get...

An error has occurred while serving metrics:

20 error(s) occurred:
* collected metric "promexample_opdemo_latency" { label:<name:"client" value:"cli" > label:<name:"label1" value:"value1" > label:<name:"method" value:"repl" > label:<name:"source" value:"source1" > histogram:<sample_count:68 sample_sum:57364.5075 bucket:<cumulative_count:0 upper_bound:10 > bucket:<cumulative_count:22 upper_bound:50 > bucket:<cumulative_count:47 upper_bound:100 > bucket:<cumulative_count:52 upper_bound:200 > bucket:<cumulative_count:54 upper_bound:400 > bucket:<cumulative_count:61 upper_bound:800 > bucket:<cumulative_count:62 upper_bound:1000 > bucket:<cumulative_count:62 upper_bound:1400 > bucket:<cumulative_count:62 upper_bound:2000 > bucket:<cumulative_count:65 upper_bound:5000 > bucket:<cumulative_count:65 upper_bound:10000 > bucket:<cumulative_count:67 upper_bound:15000 > > } was collected before with the same name and label values
...
liamawhite

comment created time in 2 months

issue comment open-telemetry/opentelemetry-collector

Problem when scraping metrics from multiple targets that expose equally named metrics

Speculative fix https://github.com/orijtech/prometheus-go-metrics-exporter/pull/10

jhengy

comment created time in 2 months

push event liamawhite/prometheus-go-metrics-exporter

Liam White

commit sha c25639910447bafff072040f111615638c53c972

handle multiple metrics with the same signature Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

push event liamawhite/prometheus-go-metrics-exporter

Liam White

commit sha d00574f6cc40e539c1ea1262a948dda6c1d89b8e

handle multiple metrics with the same signature Signed-off-by: Liam White <liam@tetrate.io>


push time in 2 months

create branch liamawhite/prometheus-go-metrics-exporter

branch : double-metric

created branch time in 2 months

issue comment open-telemetry/opentelemetry-collector

Problem when scraping metrics from multiple targets that expose equally named metrics

I have a branch with a repro in: https://github.com/liamawhite/opentelemetry-collector/tree/otel-debug/examples/demo

It seems this must happen somewhere after the receiver, since @jhengy was seeing this issue with the Prometheus receiver and the demo uses the OC receiver.

This seems like a pretty critical bug?

jhengy

comment created time in 2 months

create branch liamawhite/opentelemetry-collector

branch : otel-debug

created branch time in 2 months

pull request comment istio/istio

WIP: Initial implementation of XDS based proxystatus

Overall the approach looks good; this is a nicer solution than the one we were discussing, and it decouples us from Kube.

esnible

comment created time in 3 months

pull request comment istio/istio

resolve nil pointer dereference if kubectl context doesn't exist, fixes…

/ok-to-test

zillani

comment created time in 3 months
