
aad/booklit 0

a pretty lit content authoring system

aad/bosh-bootloader 0

Command line utility for standing up a BOSH director on an IAAS of your choice.

aad/k3os-vagrant 0

Use k3os in Vagrant!

aad/paas-tools-go 0

Debian (stretch) based container with various tools installed and go 1.10

aad/pykube 0

Lightweight Python 3.6+ client library for Kubernetes (pykube-ng)

issue opened LukeSmithxyz/voidrice

Keymapping Location

Hello, I did a fresh Artix install and then put this on top of it; it works pretty well!

I would like to ask, where are the super+[key] remappings?

For example, super+e opens email, but say I want a different program to open with that key. I found the mappings for so many other programs, but I can't find this one; it isn't in the super+F1 help either.

created time in 6 minutes

issue comment alauda/kube-ovn

Using a static IP for a VM causes an IP conflict report during live migration

xref https://github.com/kubevirt/kubevirt/issues/4910

ironxyz

comment created time in 30 minutes

issue opened alauda/kube-ovn

Using a static IP for a VM causes an IP conflict report during live migration

What happened: kube-ovn handles static IPs for VMs, but runs into a problem during VM live migration: if it binds the same IP to the migration pod, an IP conflict is reported because the two pods have the same IP. (screenshots omitted)
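For context, a minimal Go sketch of how the static address ends up attached to the pod object; this is not kube-ovn or kubevirt code, the pod name is hypothetical, and the ovn.kubernetes.io/ip_address annotation name is taken from kube-ovn's static-IP docs (treat it as an assumption). During live migration the source and target virt-launcher pods would both carry the same pinned address, which is where the conflict is reported.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical virt-launcher pod pinned to a fixed address via the
	// kube-ovn static-IP annotation (annotation name assumed from kube-ovn docs).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "virt-launcher-example-vm",
			Annotations: map[string]string{
				"ovn.kubernetes.io/ip_address": "10.16.0.15",
			},
		},
	}
	fmt.Println(pod.Name, pod.Annotations["ovn.kubernetes.io/ip_address"])
}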

created time in an hour

PR closed avelino/awesome-go

Added go-lumen (labels: need coverage, needs tests, pending-submitter-response, stale)

Please check if what you want to add to awesome-go list meets quality standards before sending pull request. Thanks!

Please provide package links to:

  • github.com repo: https://github.com/go-lumen/lumen-api
  • pkg.go.dev: https://pkg.go.dev/github.com/go-lumen/lumen-api
  • goreportcard.com: https://goreportcard.com/report/github.com/go-lumen/lumen-api
  • coverage service link (codecov, coveralls, gocover etc.): https://codecov.io/gh/go-lumen/lumen-api

Very good coverage

Note that new categories can be added only when there are 3 packages or more.

Make sure that you've checked the boxes below before you submit PR:

  • [x] I have added my package in alphabetical order.
  • [x] I have an appropriate description with correct grammar.
  • [x] I know that this package was not listed before.
  • [x] I have added pkg.go.dev link to the repo and to my pull request.
  • [x] I have added coverage service link to the repo and to my pull request.
  • [x] I have added goreportcard link to the repo and to my pull request.
  • [x] I have read Contribution guidelines, maintainers note and Quality standard.

Thanks for your PR, you're awesome! :+1:

+1 -0

3 comments

1 changed file

adrien3d

pr closed time in 2 hours

started berty/berty

started time in 4 hours

push event jbranchaud/til

jbranchaud

commit sha c893b3c984b6db3aeba5c0740cfdae6dc5b53d9b

Add Open The Selected Lines In GitHub With Gbrowse as a vim til

view details

push time in 7 hours

push event FidelityInternational/certbot-dns-qip

Willcock, Stuart

commit sha 42df0a47f6d887727f2a918a094c267837eb41d2

wip

view details

push time in 10 hours

Pull request review comment cloudfoundry/bosh-vsphere-cpi-release

New VM type cloud properties for CPU and Memory reservation

    def config_spec_params
      params = {}
      params[:num_cpus] = vm_type.cpu
      params[:memory_mb] = vm_type.ram
+     params[:cpu_allocation] = VimSdk::Vim::ResourceAllocationInfo.new(reservation: @maxMhz * vm_type.cpu) if vm_type.cpu_reserve_full_mhz
      params[:nested_hv_enabled] = true if vm_type.nested_hardware_virtualization
+     params[:memory_reservation_locked_to_max] = true if vm_type.memory_reservation_locked_to_max

The memory calculation actually fits with this:

  • it uses the memory available on the entire cluster, or looks per-host in the case of a MUST-rule host-group
  • if the memory is available, that means it is available to be locked; primarily this setting prevents ballooning or swapping

With regard to CPU, the BOSH CPI delegates to vSphere DRS to find a host that meets the requirements. When a CPU reservation is set, DRS will fail the create_VM call if there aren't enough clock cycles available on the host.

This one is somewhat more likely to error out from DRS because we aren't calculating CPU capacity at all, but I'm not sure it matters that much (if create_vm fails on a given cluster, the vm_creator will just swallow the exception and try the next cluster, no?)

svrc

comment created time in 10 hours

issue closed orange-cloudfoundry/paas-templates

align vsphere prometheus long term metrics management with external s3

Expected behavior

As a vsphere operator I want to externalize

Observed behavior


closed time in 12 hours

poblin-orange

push event alauda/kube-ovn

Yan Zhu

commit sha 2395e8cc661ea11b98975b402faffa2c5148161f

wip: support hybrid mode for geneve and vlan

view details

Yan Zhu

commit sha 8dcab10f18629c5fc3c439c407a7a1d16d0367be

fix build dev image

view details

push time in 13 hours

issue closed orange-cloudfoundry/paas-templates

thanos issue: bucket store initial sync: read dir: open /var/thanos/store: too many open files

Expected behavior

  • As a paas-templates operator
  • In order to
    • ensure metrics are saved long-term with thanos
  • I need
    • to fix the fs (open files) limitation being reached

Observed behavior

master-depls/prometheus/thanos-store, docker logs

thanos-store/94faedd1-31d9-4095-aa1c-b74c26bdc5e6:~# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED              STATUS              PORTS                                  NAMES
7d2929267a8c        paas/thanos_store:50.0.0     "/bin/thanos store -…"   About a minute ago   Up About a minute   0.0.0.0:10901-10902->10901-10902/tcp   thanos_store

level=warn ts=2021-01-26T18:21:59.200492008Z caller=bucket.go:452 msg="failed to remove block we cannot load" err="open /var/thanos/store: too many open files"
level=warn ts=2021-01-26T18:21:59.200527564Z caller=bucket.go:454 msg="loading block failed" elapsed=228.897448ms id=01EQMSMFT1N45TKVJY1X4X33CZ err="create index header reader: write index header: new binary index header writer: open /var/thanos/store/01EQMSMFT1N45TKVJY1X4X33CZ: too many open files"
level=info ts=2021-01-26T18:22:02.546784924Z caller=bucket.go:456 msg="loaded new block" elapsed=17.807137573s id=01DYKJ9G04RJCPWSXGGV9X9YNV
level=warn ts=2021-01-26T18:22:02.547696865Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason="bucket store initial sync: read dir: open /var/thanos/store: too many open files"
level=info ts=2021-01-26T18:22:02.547763565Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2021-01-26T18:22:02.547789361Z caller=http.go:81 service=http/server component=store msg="internal server shutdown" err="bucket store initial sync: read dir: open /var/thanos/store: too many open files"
level=info ts=2021-01-26T18:22:02.547854997Z caller=grpc.go:119 service=gRPC/server component=store msg="listening for serving gRPC" address=0.0.0.0:10901
level=info ts=2021-01-26T18:22:02.547864755Z caller=intrumentation.go:66 msg="changing probe status" status=not-healthy reason="bucket store initial sync: read dir: open /var/thanos/store: too many open files"
level=warn ts=2021-01-26T18:22:02.547905705Z caller=intrumentation.go:54 msg="changing probe status" status=not-ready reason="bucket store initial sync: read dir: open /var/thanos/store: too many open files"
level=info ts=2021-01-26T18:22:02.547932598Z caller=grpc.go:138 service=gRPC/server component=store msg="gracefully stopping internal server"
level=info ts=2021-01-26T18:22:02.547984658Z caller=grpc.go:150 service=gRPC/server component=store msg="internal server shutdown" err="bucket store initial sync: read dir: open /var/thanos/store: too many open files"
level=error ts=2021-01-26T18:22:02.548135146Z caller=main.go:211 err="open /var/thanos/store: too many open files\nread dir\ngithub.com/thanos-io/thanos/pkg/store.(*BucketStore).InitialSync\n\t/go/src/github.com/thanos-io/thanos/pkg/store/bucket.go:413\nmain.runStore.func4\n\t/go/src/github.com/thanos-io/thanos/cmd/thanos/store.go:323\ngithub.com/oklog/run.(*Group).Run.func1\n\t/go/pkg/mod/github.com/oklog/run@v1.1.0/group.go:38\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373\nbucket store initial sync\nmain.runStore.func4\n\t/go/src/github.com/thanos-io/thanos/cmd/thanos/store.go:325\ngithub.com/oklog/run.(*Group).Run.func1\n\t/go/pkg/mod/github.com/oklog/run@v1.1.0/group.go:38\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373\nstore command failed\nmain.main\n\t/go/src/github.com/thanos-io/thanos/cmd/thanos/main.go:211\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203\nruntime.go
root     31449 31426 13 18:36 ?        00:00:12 /bin/thanos store --log.level=info --data-dir /var/thanos/store --objstore.config-file=/config/bucket_config.yaml
thanos-store/94faedd1-31d9-4095-aa1c-b74c26bdc5e6:/var/vcap/store/containers/thanos_store# cat /proc/31449/limits
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             128551               128551               processes 
Max open files            8192                 8192                 files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       128551               128551               signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        
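For reference, a minimal Go sketch (not part of thanos, just an illustration of the mechanism) showing how a process can inspect its RLIMIT_NOFILE and raise the soft limit up to the hard limit. In the dump above the soft limit of 8192 is what the bucket sync runs into; in a BOSH/docker setup the limit would normally be raised through the job/container configuration (see the docker-boshrelease spec referenced below) rather than in the application itself.

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	// Read the current open-files limits (soft and hard), as shown by /proc/<pid>/limits.
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("open files: soft=%d hard=%d\n", rl.Cur, rl.Max)

	// A process may raise its own soft limit up to the hard limit.
	rl.Cur = rl.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("setrlimit failed:", err)
	}
}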

Affected releases

  • 49.0.0 (openstack hws)
  • thanos 0.14

references

  • https://github.com/cloudfoundry-incubator/docker-boshrelease/blob/master/jobs/containers/spec#L56


closed time in 13 hours

poblin-orange

issue closed orange-cloudfoundry/paas-templates

no more cf logs after cf-deployment 15 bump / bosh dns cert rotation

Expected behavior

  • As a paas-templates operator | cf user | marketplace user | osb service provider | paas-templates author | paas-templates maintainer
  • In order to
  • I need

Observed behavior

  • cf logs and cf logs --recent do not give any logs.

  • cf smoke tests failure:

  • zooming in on diego-cell bosh-dns logs:

[HealthChecker] 2021-01-25T11:22:56.995583000Z WARN - network error connecting to 192.168.35.64: Performing GET request: Get "https://192.168.35.64:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.995766000Z WARN - network error connecting to 192.168.35.65: Performing GET request: Get "https://192.168.35.65:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.995960000Z WARN - network error connecting to 192.168.35.71: Performing GET request: Get "https://192.168.35.71:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.995987000Z WARN - network error connecting to 192.168.35.30: Performing GET request: Get "https://192.168.35.30:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.996000000Z WARN - network error connecting to 192.168.35.67: Performing GET request: Get "https://192.168.35.67:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.996145000Z WARN - network error connecting to 192.168.35.76: Performing GET request: Get "https://192.168.35.76:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.996438000Z WARN - network error connecting to 192.168.35.31: Performing GET request: Get "https://192.168.35.31:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.997350000Z WARN - network error connecting to 192.168.35.63: Performing GET request: Get "https://192.168.35.63:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
[HealthChecker] 2021-01-25T11:22:56.997482000Z WARN - network error connecting to 192.168.35.72: Performing GET request: Get "https://192.168.35.72:8853/health": x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns
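The WARN lines above come down to X.509 hostname matching: a wildcard SAN such as *.health.bosh-dns covers exactly one extra label, so it does not match the bare name health.bosh-dns that the health checker queries. A minimal, self-contained Go illustration of that rule (the names are taken from the logs; the instance subdomain is made up):

package main

import (
	"crypto/x509"
	"fmt"
)

func main() {
	// Certificate whose only SAN is the wildcard seen in the bosh-dns logs.
	cert := &x509.Certificate{DNSNames: []string{"*.health.bosh-dns"}}

	// The wildcard matches names one label below it...
	fmt.Println(cert.VerifyHostname("some-instance.health.bosh-dns")) // <nil>

	// ...but not the apex name itself, which is exactly the health-check failure above:
	// "x509: certificate is valid for *.health.bosh-dns, not health.bosh-dns"
	fmt.Println(cert.VerifyHostname("health.bosh-dns"))
}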

references

  • https://github.com/cloudfoundry/cf-deployment/issues/905

Affected releases

  • 50.0.0 ?
  • earlier versions


closed time in 13 hours

poblin-orange

issue closed orange-cloudfoundry/paas-templates

k8s backup to s3 should use intranet proxy on vsphere iaas-type

Expected behavior

  • As a paas-templates operator, running on vsphere iaas-type
  • In order to
    • keep my backup on premise
  • I need
    • the k8s backup solution to be reached via intranet-proxy (not internet-proxy)

Observed behavior

Affected releases

  • 49.0.2
  • earlier versions


closed time in 14 hours

poblin-orange

issue opened orange-cloudfoundry/paas-templates

COAB fails with `Disk quota exceeded while cloning`

Expected behavior

  • As a paas-templates operator
  • In order to ensure control plane SLA
  • I need coab to not fail on user requests

Observed behavior

2021-01-27T10:01:52.54+0000 [APP/PROC/WEB/0] OUT 2021-01-27 10:01:52.540  WARN 7 --- [nio-8080-exec-4] c.o.o.c.b.o.o.git.SimpleGitManager       : [paas-secrets.] caught org.eclipse.jgit.api.errors.JGitInternalException: Disk quota exceeded while cloning into dir: /home/vcap/tmp/paas-secrets-clone-2036587114671081554 Cleaning up local clone.

The disk usage on a normal instance in the lab environment is quite close to the disk quota.

Disk usage is due to the secrets repo clone pool:

$ cf ssh coa-mongodb-broker -c 'du -skh ./*; du -skh ./tmp/*; ls -al ./tmp'
160M    ./app
4.0K    ./deps
0    ./logs
0    ./profile.d
4.0K    ./staging_info.yml
263M    ./tmp
100M    ./tmp/paas-secrets-clone-3595955914359928823
100M    ./tmp/paas-secrets-clone-5985644440137189667
63M    ./tmp/paas-templates-clone-5445877573471240098
0    ./tmp/tomcat.1698924466642891968.8080
0    ./tmp/tomcat-docbase.2139178613153011778.8080
total 12
drwxr-xr-x  7 vcap vcap  232 Jan 27 11:06 .
drwx------  1 vcap vcap  114 Jan 27 12:45 ..
drwx------ 17 vcap vcap 4096 Jan 27 11:21 paas-secrets-clone-3595955914359928823
drwx------ 18 vcap vcap 4096 Jan 27 11:06 paas-secrets-clone-5985644440137189667
drwx------ 18 vcap vcap 4096 Jan 27 11:06 paas-templates-clone-5445877573471240098
drwxr-xr-x  3 vcap vcap   18 Jan 27 11:05 tomcat.1698924466642891968.8080
drwxr-xr-x  2 vcap vcap    6 Jan 27 11:05 tomcat-docbase.2139178613153011778.8080

The pool is configured to use one clone per concurrent request and does not specify a maximum number of idle elements, so it can grow to the maximum number of concurrent OSB requests (excluding /catalog).
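Illustrative sketch only (COAB is a Java/Spring application; the type and directory names below are hypothetical): a clone pool with a hard cap on idle elements is the behaviour the "tune the clone pool" fix below points at, i.e. clones beyond the idle cap get deleted instead of accumulating against the disk quota.

package main

import "fmt"

// clone stands in for a local git clone of the secrets repo (hypothetical).
type clone struct{ dir string }

// clonePool keeps at most maxIdle clones on disk for reuse; extra clones are
// dropped on release instead of counting against the app's disk quota.
type clonePool struct {
	idle chan *clone
	seq  int
}

func newClonePool(maxIdle int) *clonePool {
	return &clonePool{idle: make(chan *clone, maxIdle)}
}

func (p *clonePool) borrow() *clone {
	select {
	case c := <-p.idle: // reuse an idle clone
		return c
	default: // none idle: create a fresh one (~100M on disk per the du output above)
		p.seq++
		return &clone{dir: fmt.Sprintf("/tmp/paas-secrets-clone-%d", p.seq)}
	}
}

func (p *clonePool) release(c *clone) {
	select {
	case p.idle <- c: // keep for reuse while under the idle cap
	default: // idle cap reached: delete the on-disk clone instead of hoarding quota
		fmt.Println("removing", c.dir)
	}
}

func main() {
	p := newClonePool(1)
	a, b := p.borrow(), p.borrow()
	p.release(a)
	p.release(b) // exceeds maxIdle=1, so this clone would be deleted
}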

Alternative fixes

  • bump the disk quota to 1.5 GB
  • tune the clone pool

Affected releases

  • 49.x
  • earlier versions


created time in 14 hours


push event alauda/kube-ovn

Yan Zhu

commit sha 01fd49b2eb55c043e10843529b8adb5498f0924b

fix build dev image

view details

push time in 15 hours

issue comment orange-cloudfoundry/paas-templates

multi-region: remote-r2/remote 3 compilation networks misconfiguration

Should be developed and tested on the vsphere integration env to reduce risks.

poblin-orange

comment created time in 17 hours

pull request comment alauda/kube-ovn

fix: ip6tables check error

Good catch, thanks @kylin-D-ZZ

kylin-D-ZZ

comment created time in 17 hours

push event alauda/kube-ovn

cmj

commit sha d594554d73f83d7d5b9e601533c8fb27f225dc47

fix: ip6tables check error

view details

Oilbeater

commit sha 0ccca06908e20a359fb53fd6e4b493eba4b8f3f6

Merge pull request #662 from kylin-D-ZZ/master fix: ip6tables check error

view details

push time in 17 hours

PR merged alauda/kube-ovn

fix: ip6tables check error

kubectl logs -n kube-system kube-ovn-cni-2gxbr

E0127 09:46:06.286596 3059 gateway.go:66] failed to set gw iptables
I0127 09:46:07.413709 3059 server.go:62] [2021-01-27T09:46:07Z] Incoming HTTP/1.1 POST /api/v1/del request
I0127 09:46:07.413788 3059 handler.go:203] delete port request {nvidia-kubevirt-gpu-dp-daemonset-l8s8b kube-system 365168001d880f7fa209472d1dc15694dc647b847601065d1c3fa7a039f377d7 /proc/19052/ns/net ovn }
I0127 09:46:07.451800 3059 server.go:70] [2021-01-27T09:46:07Z] Outgoing response POST /api/v1/del with 204 status code in 38ms
I0127 09:46:07.833678 3059 server.go:62] [2021-01-27T09:46:07Z] Incoming HTTP/1.1 POST /api/v1/add request
I0127 09:46:07.833741 3059 handler.go:46] add port request {kube-ovn-pinger-kc5c4 kube-system 4e40756a0e85943cbc73da1289f0c2e3d06dcc49de3b6c6a10da4c7eec77e0ec /proc/24242/ns/net eth0 ovn }
I0127 09:46:07.839226 3059 handler.go:104] create container interface eth0 mac 00:00:00:2E:7B:B5, ip 10.16.0.16/16,fd00:10:16::10/64, cidr 10.16.0.0/16,fd00:10:16::/64, gw 10.16.0.1,fd00:10:16::1
I0127 09:46:07.853141 3059 controller.go:292] route to del [fe80::/64]
I0127 09:46:08.008371 3059 ovs.go:233] network ready after 1 ping, gw 10.16.0.1
E0127 09:46:09.411000 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60subnets-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:09.411057 3059 gateway.go:66] failed to set gw iptables
I0127 09:46:10.143058 3059 ovs.go:233] network ready after 3 ping, gw fd00:10:16::1
I0127 09:46:10.143588 3059 server.go:70] [2021-01-27T09:46:10Z] Outgoing response POST /api/v1/add with 200 status code in 2309ms
E0127 09:46:12.542341 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60local-pod-ip-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:12.542386 3059 gateway.go:66] failed to set gw iptables
E0127 09:46:15.631603 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60subnets-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:15.631650 3059 gateway.go:66] failed to set gw iptables
E0127 09:46:18.769318 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60local-pod-ip-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:18.769369 3059 gateway.go:66] failed to set gw iptables
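For context, the failing commands in the log are ip6tables -C existence checks for the ovn60* ipset rules. Below is a minimal Go sketch of such a check with github.com/coreos/go-iptables; this is not kube-ovn's actual code. The rule spec is copied from the log, and each token must be passed as its own argument (a mis-tokenised spec is one common way to get ip6tables' "Bad argument" / exit status 2, though whether that was the exact bug here is an assumption).

package main

import (
	"fmt"

	"github.com/coreos/go-iptables/iptables"
)

func main() {
	// ip6tables handle (go-iptables shells out to the ip6tables binary).
	ipt, err := iptables.NewWithProtocol(iptables.ProtocolIPv6)
	if err != nil {
		panic(err)
	}

	// Rule spec from the log; every token, including the "!" negations, is a separate argument.
	rule := []string{
		"-m", "set", "!", "--match-set", "ovn60subnets", "src",
		"-m", "set", "!", "--match-set", "ovn60other-node", "src",
		"-m", "set", "--match-set", "ovn60subnets-nat", "dst",
		"-j", "RETURN",
	}

	exists, err := ipt.Exists("nat", "POSTROUTING", rule...)
	if err != nil {
		fmt.Println("check failed:", err) // corresponds to the gateway.go:164 errors above
		return
	}
	if !exists {
		if err := ipt.Append("nat", "POSTROUTING", rule...); err != nil {
			fmt.Println("append failed:", err)
		}
	}
}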

+2 -2

0 comments

1 changed file

kylin-D-ZZ

pr closed time in 17 hours

created tag FidelityInternational/paas-tools-ruby

tag 4.0.7

Debian based container with Ruby 2.5 and various tools installed

created time in 17 hours

push event FidelityInternational/paas-tools-ruby

Rob Langley

commit sha 8eb4cfb497e97e1d881f9b9c56316bb54c045fa1

bump cf version and fly

view details

push time in 17 hours

issue opened orange-cloudfoundry/paas-templates

multi-region: remote-r2/remote 3 compilation networks misconfiguration

Expected behavior

  • As a paas-templates operator
  • In order to
  • I need
    • the compilation subnet specified in shared/secrets.yml to be respected

Observed behavior

The bosh compilation network definition simply reuses the interco network; it should define a separate compilation network, applying the compilation subnet specified in shared/secrets.yml.

Affected releases

  • 49.0.0
  • 48.0.0

<!--

Traces and logs

Remember this is a public repo. DON'T leak credentials or Orange internal URLs. Automation may be applied in the future.

created time in 17 hours

issue opened orange-cloudfoundry/paas-templates

multi-region: COA debug profile induces compilation IP shortage

Expected behavior

  • As a paas-templates operator
  • In order to
  • I need

Observed behavior

After a remote-rx/00-bootstrap failure or a remote-rx/yy-deployment failure, we hit errors indicating that no more IPs are available.

Analysis

If the COA debug profile is activated, the bosh director keeps unreachable VMs, so the IPs are not freed at the end of the failed deployment. The next bosh deploy will then use other IPs in the subnet (NB: COA attempts the bosh deploy twice, so each COA run consumes 2 IPs).

Workaround

  • precompilation failure:
    • relaunch COA precompilation jobs
  • 00-bootstrap vpn failure:
    • option 1: simple actions to let bosh clean the compilation VMs (e.g. bosh stop; bosh start)
    • option 2: bosh delete-deployment -d 00-bootstrap. NB: this requires applying the workarounds #xxx

Affected releases

  • x.y
  • earlier versions


created time in 17 hours

PR opened alauda/kube-ovn

fix: ip6tables check error

kubectl logs -n kube-system kube-ovn-cni-2gxbr

E0127 09:46:06.286596 3059 gateway.go:66] failed to set gw iptables
I0127 09:46:07.413709 3059 server.go:62] [2021-01-27T09:46:07Z] Incoming HTTP/1.1 POST /api/v1/del request
I0127 09:46:07.413788 3059 handler.go:203] delete port request {nvidia-kubevirt-gpu-dp-daemonset-l8s8b kube-system 365168001d880f7fa209472d1dc15694dc647b847601065d1c3fa7a039f377d7 /proc/19052/ns/net ovn }
I0127 09:46:07.451800 3059 server.go:70] [2021-01-27T09:46:07Z] Outgoing response POST /api/v1/del with 204 status code in 38ms
I0127 09:46:07.833678 3059 server.go:62] [2021-01-27T09:46:07Z] Incoming HTTP/1.1 POST /api/v1/add request
I0127 09:46:07.833741 3059 handler.go:46] add port request {kube-ovn-pinger-kc5c4 kube-system 4e40756a0e85943cbc73da1289f0c2e3d06dcc49de3b6c6a10da4c7eec77e0ec /proc/24242/ns/net eth0 ovn }
I0127 09:46:07.839226 3059 handler.go:104] create container interface eth0 mac 00:00:00:2E:7B:B5, ip 10.16.0.16/16,fd00:10:16::10/64, cidr 10.16.0.0/16,fd00:10:16::/64, gw 10.16.0.1,fd00:10:16::1
I0127 09:46:07.853141 3059 controller.go:292] route to del [fe80::/64]
I0127 09:46:08.008371 3059 ovs.go:233] network ready after 1 ping, gw 10.16.0.1
E0127 09:46:09.411000 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60subnets-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:09.411057 3059 gateway.go:66] failed to set gw iptables
I0127 09:46:10.143058 3059 ovs.go:233] network ready after 3 ping, gw fd00:10:16::1
I0127 09:46:10.143588 3059 server.go:70] [2021-01-27T09:46:10Z] Outgoing response POST /api/v1/add with 200 status code in 2309ms
E0127 09:46:12.542341 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60local-pod-ip-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:12.542386 3059 gateway.go:66] failed to set gw iptables
E0127 09:46:15.631603 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60subnets-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:15.631650 3059 gateway.go:66] failed to set gw iptables
E0127 09:46:18.769318 3059 gateway.go:164] check iptable rule exist failed, running [/usr/sbin/ip6tables -t nat -C POSTROUTING -m set ! --match-set ovn60subnets src -m set ! --match-set ovn60other-node src -m set --match-set ovn60local-pod-ip-nat dst -j RETURN --wait]: exit status 2: Bad argument ' Tryip6tables -h' or 'ip6tables --help' for more information.
E0127 09:46:18.769369 3059 gateway.go:66] failed to set gw iptables

+2 -2

0 comments

1 changed file

pr created time in 17 hours

push event avelino/awesome-go

Robert Catmull

commit sha ec3b9dbc6261150bbb83aed41a90837f9e584b7e

Adding go-workers to README.md (#3461)

view details

push time in 17 hours

PR merged avelino/awesome-go

Adding go-workers to README.md

Please check if what you want to add to awesome-go list meets quality standards before sending pull request. Thanks!

Please provide package links to:

  • github.com repo: https://github.com/catmullet/go-workers
  • pkg.go.dev: https://pkg.go.dev/github.com/catmullet/go-workers
  • goreportcard.com: https://goreportcard.com/report/github.com/catmullet/go-workers
  • coverage service link (codecov, coveralls, gocover etc.) https://gocover.io/github.com/catmullet/go-workers

Very good coverage

Note that new categories can be added only when there are 3 packages or more.

Make sure that you've checked the boxes below before you submit PR:

  • [x] I have added my package in alphabetical order.
  • [x] I have an appropriate description with correct grammar.
  • [x] I know that this package was not listed before.
  • [x] I have added pkg.go.dev link to the repo and to my pull request.
  • [x] I have added coverage service link to the repo and to my pull request.
  • [x] I have added goreportcard link to the repo and to my pull request.
  • [x] I have read Contribution guidelines, maintainers note and Quality standard.

Thanks for your PR, you're awesome! :+1:

+1 -0

1 comment

1 changed file

catmullet

pr closed time in 17 hours

issue opened alauda/kube-ovn

After adding a NetworkPolicy, the NodePort application cannot be accessed

After adding a NetworkPolicy, the NodePort application cannot be accessed. The ovn-controller log:

(screenshot of the ovn-controller log)

created time in 17 hours
