If you are wondering where the data of this site comes from, please visit https://api.github.com/users/localvar/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

localvar/makeepub 87

A tool to help generate EPUB books from HTML files.

megaease/easegress-assemblyscript-sdk 17

AssemblyScript SDK for Easegress

localvar/zhuyin 15

A Go language library for Chinese Zhuyin and Pinyin. A library that helps handle Chinese Zhuyin and Pinyin, e.g. converting zhang1 to zhāng or ㄓㄤ.

localvar/easegress 2

An all-rounder traffic orchestration system

localvar/legacy 2

backup of legacy projects, no maintenance, no support, no guarantee

benja-wu/easegress 1

An all-rounder traffic orchestration system

xxx7xxxx/easegress 1

An all-rounder traffic orchestration system

localvar/easemesh 0

A service mesh implementation for connecting, controlling, and observing services in Spring Cloud.

localvar/go-metrics 0

Go port of Coda Hale's Metrics library

push event localvar/easegress

Bomin Zhang

commit sha ac2d749e33d3f42eaa9c86eba616d8d94184e85f

update gjson to fix security issue


push time in 2 hours

Pull request review comment megaease/easegress

update gjson to fix security issue

 type (
 	//  1. Based on comparison between old and new part of entry.
 	//  2. Based on comparison on entries with the same prefix.
 	Informer interface {
-		OnPartOfServiceSpec(serviceName string, gjsonPath GJSONPath, fn ServiceSpecFunc) error

They are not used, so I removed the dependency from the informer.

localvar

comment created time in 2 hours

PullRequestReviewEvent

PR opened megaease/easegress

update gjson to fix security issue
+48 -92

0 comment

6 changed files

pr created time in 3 hours

create branch localvar/easegress

branch: fix-security-issue

created branch time in 3 hours

Pull request review comment megaease/easegress

performance tuning

 package codecounter
 
+import "sync/atomic"
+
 // CodeCounter is the goroutine unsafe code counter.
+// It is designed for counting http status code which is 1XX - 5XX,
+// So the code range are limited to [0, 999]
 type CodeCounter struct {
-	//      code:count
-	counter map[int]uint64
+	counter []uint64
 }
 
 // New creates a CodeCounter.
 func New() *CodeCounter {
 	return &CodeCounter{
-		counter: make(map[int]uint64),
+		counter: make([]uint64, 1000),
 	}
 }
 
 // Count counts a new code.
 func (cc *CodeCounter) Count(code int) {
-	cc.counter[code]++
+	if code < 0 || code >= len(cc.counter) {
+		// TODO: log? panic?
+		return
+	}
+	atomic.AddUint64(&cc.counter[code], 1)
+}
+
+// Reset resets counters of all codes to zero
+func (cc *CodeCounter) Reset() {
+	for i := 0; i < len(cc.counter); i++ {
+		cc.counter[i] = 0
+	}

Thanks, I will rename it. For Reset, there is no need to use atomic.StoreUint64; as the comment on CodeCounter says, it is not goroutine safe. We only need to ensure that Count can be called concurrently.

localvar

comment created time in a day

PullRequestReviewEvent
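The CodeCounter design discussed in this review, a fixed-size slice indexed by status code with atomic increments in Count and plain writes in Reset, can be sketched as follows (a simplified reconstruction for illustration; the Get accessor is a hypothetical helper, not part of the original diff):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// CodeCounter counts HTTP status codes in the range [0, 999].
// Count may be called concurrently; Reset may not.
type CodeCounter struct {
	counter []uint64
}

// New creates a CodeCounter.
func New() *CodeCounter {
	return &CodeCounter{counter: make([]uint64, 1000)}
}

// Count atomically increments the counter for code;
// out-of-range codes are ignored.
func (cc *CodeCounter) Count(code int) {
	if code < 0 || code >= len(cc.counter) {
		return
	}
	atomic.AddUint64(&cc.counter[code], 1)
}

// Reset zeroes all counters. It is not safe to call
// concurrently with Count.
func (cc *CodeCounter) Reset() {
	for i := range cc.counter {
		cc.counter[i] = 0
	}
}

// Get returns the current count for code (hypothetical accessor).
func (cc *CodeCounter) Get(code int) uint64 {
	if code < 0 || code >= len(cc.counter) {
		return 0
	}
	return atomic.LoadUint64(&cc.counter[code])
}

func main() {
	cc := New()
	cc.Count(200)
	cc.Count(200)
	cc.Count(404)
	fmt.Println(cc.Get(200), cc.Get(404)) // 2 1
}
```

Replacing the map with a slice removes hashing from the hot path, and atomic.AddUint64 makes Count safe for concurrent callers without a mutex.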

push event localvar/easegress

u2386

commit sha f49e6379e4ad19281f9609b1acb5c6aa57e2441a

fix typos in eureka registry logs (#293)


SU Chen

commit sha bb1cf77a57853dfe5a46112bc6811170f4fc0aa5

MQTT support (#203)

* update go mod and sum for paho.mqtt.golang
* mqtt proxy init, finish basic connect and publish
* add kafka producer and update other details
* add sub unsub keepalive
* add session
* spell and detail update
* add auth and topic mapping from mqtt to kafka
* add comment for exported type
* impl broker send msg back to client
* update github action
* update session for yaml encode and decode
* add sess store
* fix data race caused by struct mutex and yaml marshal
* load all store session when init and update topicMgr
* update github aciton
* add http endpoint
* add resend pending msg
* update resend
* upadte
* update topic manager for multi-pod deployment
* update session manager for multi-pod deployment
* add tls, http transfer, update topic lock
* update log
* fix bug
* update
* fix http transfer bug
* update topic mapper
* support wildcard and update topic mapper
* udpate and fix bug
* fix http infinite transfer data bug
* fix analysis for git action
* update go mod
* update for github action
* fix typo, add sha256, fix goroutine leak
* optim design details
* update storage
* add test for mqtt proxy
* add more test
* improve test coverage
* fix bug
* add client test
* add backend mock test
* add broker test
* improve test coverage
* optim code
* optim topic manager check speed
* add test for topic manager
* early stop for topic manager clear memory for remove
* optim topic manager add comments to code
* update test report msg
* fix http transfer bug in k8s environment
* update mqtt client disconnect error handling
* update mqtt test
* update mqtt test
* update mqtt will msg and fix bug
* fix mqtt will bug
* optim code
* fix bug between different auto test
* fix goroutine leak bug
* add debug log for mqttproxy
* load mqtt session as needed, not load all at start
* fix goroutine leak
* optim mqtt debug mode for produce less meaningless logs
* fix client take over get closed session bug
* fix mqtt ticker bug
* update debug log
* replace tech name with more reasonable business name


Bomin Zhang

commit sha 12f3379dda60a3f63cfae2274b9d2ce6fd21c937

bump up buger/jsonparser to fix security issue (#297)


Bomin Zhang

commit sha bc90a63c1df6356e9e43d376f58bec1ed9559966

release v1.3.0 (#298)


dependabot[bot]

commit sha a21a477e0e1d5976767c067df3a4ed1987a676bf

Bump knative.dev/client from 0.23.1 to 0.26.0 (#291)

Bumps [knative.dev/client](https://github.com/knative/client) from 0.23.1 to 0.26.0.
- [Release notes](https://github.com/knative/client/releases)
- [Changelog](https://github.com/knative/client/blob/main/CHANGELOG.adoc)
- [Commits](https://github.com/knative/client/compare/v0.23.1...v0.26.0)

---
updated-dependencies:
- dependency-name: knative.dev/client
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>


Bomin Zhang

commit sha 11e857abf70ee691dd0102eb7fb23741bb2bd839

add issue templates


benjaminwu

commit sha 6cb8de318606608ac314ec78dcfa4a4825ed642f

[mesh] sidecar egress supports routing by request domain (#299)

* [mesh] sidecar egress supports routing by request domain
* Update pkg/object/meshcontroller/worker/egress.go

Co-authored-by: Bomin Zhang <localvar@hotmail.com>

* Update pkg/object/meshcontroller/worker/egress.go

Co-authored-by: Bomin Zhang <localvar@hotmail.com>
Co-authored-by: Bomin Zhang <localvar@hotmail.com>


SU Chen

commit sha 5597ba9456cc5b286c36aef049e5c81212454cfb

add doc for mqtt, update makefile, readme, readme.zh, cookbook/readme (#300)

* add doc for mqtt, update makefile, readme, readme.zh, cookbook/readme
* fix order of cookbook readme based on alphabet order
* update mqtt doc and readme user case
* fix README.zh-CN.md label


SU Chen

commit sha c0f5200d54547d165c98a8e1ce8f583f29489858

rm mqtt client from broker when close, fix double close chan when broker and client close at same time (#302)


Bomin Zhang

commit sha 265d91abae84b2b5ff7453e733c506a752c56121

avoid escaping json schema and fix bugs related to multi-level dynamic object (#272)

* fix multi-level custom resource json marshal error
* avoid escaping json schema
* use the correct yaml package


Bomin Zhang

commit sha 78c95cb2554223155d2d6c2517609a8bf857c926

improve docker image build speed (#306)

* use dedicated golang image as compiler
* avoid copying source code
* avoid downloading dependencies
* enable golang build cache
* avoid scanning large folders


Nevill

commit sha 7c1b6d3c7d44c5d6783f6d36e26a3a0b16122c67

change the logger level from Error to Warn, fix #296 (#307)


SU Chen

commit sha 69ef753b8855dce1930b38ae4c4d97299a69a14b

MQTT performance optimization (#304)

* mqtt performance optim
* rm unnecessary defer
* update github action
* fix mqtt wildcard # subscribe edge case
* add more test for mqtt topic
* fix typo, drop qos0 msg when sending channel is full
* update error msg
* mqtt clean session and topicMgr when close client connection
* add test for clean session in mqtt


benjaminwu

commit sha 514ba09e223de26427741e5bddd389808c40fbc5

[mesh] enable mTLS (#281)

* [mesh] add cert manager
* [mesh] fix conflict
* [mesh] add define
* [mesh] add ingress controller cert handling
* [mesh] fix autotest failure
* [mesh] refactor and raise coverage rate
* [mesh] fix provider bug
* [mesh] fix mTLS cert bug
* Apply suggestions from localvar's code review

Co-authored-by: Bomin Zhang <localvar@hotmail.com>

* [mesh] refactor according to localvar's comments
* [mesh] raise coverage rate
* Update pkg/filter/proxy/proxy.go

Co-authored-by: Hao Chen <chenhaox@gmail.com>

* [mesh] update with haoel's comments
* [mesh] refactor informer server and ingresscontroller onCert func
* [mesh] fix typo
* [mesh] fix ingress httpserver bug and update spec test
* [mesh] solve conflict
* [mesh] fix warning and bug
* [mesh] fix typo
* [mesh] fix init bug and add more info log
* [mesh] fix mTLS bugs
* [mesh] fix auto test
* [mesh] ingress/egress use its own informer
* [mesh] add closing informer in ingress/egress
* [mesh] udpate according to yunlong's comments

Co-authored-by: Bomin Zhang <localvar@hotmail.com>
Co-authored-by: Hao Chen <chenhaox@gmail.com>


Yun Long

commit sha 96dd37d42d08c46c2bfc77329a8f6f161e047e98

[mesh]: Keep registering spec & fix some style (#311)


gw123

commit sha 94c8917cf79321dd840546000ba943759b9eb6a2

fix eureka bug (#309)

* fix eureka bug
* fix eureka https port bug


SU Chen

commit sha cbbcea1456b693ae0e1771896dab9d480dd2833c

release v1.3.1 (#312)

* release v1.3.1


^_^void

commit sha d7e55e93306717ae8620e795e3240239d0f41652

Update hostfunc.go (#317)


Yun Long

commit sha 6797c9afbcd6fcba5284a03ec6921614373a31b9

Add docker tag for makefile (#313)


invzhi

commit sha c6a3d760fb18dd818dbe4d900abab9365cb666a9

Tiny tweaks for doc & code (#314)

* [doc]: tweaks README.zh-CN.md fmt
* tweaks for code readable
* Fix typos & remove redundant code
* Fix wrong code change


push time in a day

issue opened megaease/easegress

high CPU usage in mesh deployment

Describe the bug

High CPU usage in mesh deployment.

To Reproduce

  1. Deploy mesh control plane
  2. Deploy mesh services
  3. The CPU usage is high in both the control plane and sidecar

created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package tcpproxy
+
+import (
+	"fmt"
+	"net"
+	"sync"
+
+	"github.com/megaease/easegress/pkg/logger"
+	"github.com/megaease/easegress/pkg/util/limitlistener"
+)
+
+// ListenerState listener running state
+type ListenerState int
+
+type listener struct {
+	name      string
+	localAddr string        // listen addr
+	state     ListenerState // listener state
+
+	mutex    *sync.Mutex
+	stopChan chan struct{}
+	maxConns uint32 // maxConn for tcp listener
+
+	tcpListener *limitlistener.LimitListener                    // tcp listener with accept limit
+	onAccept    func(conn net.Conn, listenerStop chan struct{}) // tcp accept handle
+}
+
+func newListener(spec *Spec, onAccept func(conn net.Conn, listenerStop chan struct{})) *listener {
+	listen := &listener{
+		name:      spec.Name,
+		localAddr: fmt.Sprintf(":%d", spec.Port),
+
+		onAccept: onAccept,
+		maxConns: spec.MaxConnections,
+
+		mutex:    &sync.Mutex{},
+		stopChan: make(chan struct{}),
+	}
+	return listen
+}
+
+func (l *listener) listen() error {
+	tl, err := net.Listen("tcp", l.localAddr)
+	if err != nil {
+		return err
+	}
+	// wrap tcp listener with accept limit
+	l.tcpListener = limitlistener.NewLimitListener(tl, l.maxConns)
+	return nil
+}
+
+func (l *listener) acceptEventLoop() {
+
+	for {
+		if tconn, err := l.tcpListener.Accept(); err != nil {
+			if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
+				logger.Infof("tcp listener(%s) stop accept connection due to timeout, err: %s",
+					l.localAddr, nerr)
+				return
+			}
+
+			if ope, ok := err.(*net.OpError); ok {
+				// not timeout error and not temporary, which means the error is non-recoverable
+				if !(ope.Timeout() && ope.Temporary()) {
+					// accept error raised by sockets closing
+					if ope.Op == "accept" {
+						logger.Debugf("tcp listener(%s) stop accept connection due to listener closed", l.localAddr)
+					} else {
+						logger.Errorf("tcp listener(%s) stop accept connection due to non-recoverable error: %s",
+							l.localAddr, err.Error())
+					}
+					return
+				}
+			} else {
+				logger.Errorf("tcp listener(%s) stop accept connection with unknown error: %s.",
+					l.localAddr, err.Error())
+			}
+		} else {
+			go l.onAccept(tconn, l.stopChan)
+		}

We can refactor to avoid the deep nesting, e.g. check err == nil first and use continue. This would make the code more readable.

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/*+ * Copyright (c) 2017, MegaEase+ * All rights reserved.+ *+ * Licensed under the Apache License, Version 2.0 (the "License");+ * you may not use this file except in compliance with the License.+ * You may obtain a copy of the License at+ *+ *     http://www.apache.org/licenses/LICENSE-2.0+ *+ * Unless required by applicable law or agreed to in writing, software+ * distributed under the License is distributed on an "AS IS" BASIS,+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.+ * See the License for the specific language governing permissions and+ * limitations under the License.+ */++package tcpproxy++import (+	"errors"+	"io"+	"net"+	"reflect"+	"runtime/debug"+	"sync"+	"sync/atomic"+	"time"++	"github.com/megaease/easegress/pkg/logger"+	"github.com/megaease/easegress/pkg/util/iobufferpool"+	"github.com/megaease/easegress/pkg/util/timerpool"+)++// Connection wrap tcp connection+type Connection struct {+	rawConn   net.Conn+	connected uint32+	closed    uint32++	localAddr  net.Addr+	remoteAddr net.Addr++	lastBytesSizeRead  int64+	lastWriteSizeWrite int64++	readBuffer      iobufferpool.IoBuffer+	writeBuffers    net.Buffers+	ioBuffers       []iobufferpool.IoBuffer+	writeBufferChan chan *[]iobufferpool.IoBuffer++	mu               sync.Mutex+	startOnce        sync.Once+	connStopChan     chan struct{} // use for connection close+	listenerStopChan chan struct{} // use for listener close++	onRead  func(buffer iobufferpool.IoBuffer) // execute read filters+	onClose func(event ConnectionEvent)+}++// NewDownstreamConn wrap connection create from client+// @param remoteAddr client addr for udp proxy use+func NewDownstreamConn(conn net.Conn, remoteAddr net.Addr, listenerStopChan chan struct{}) *Connection {+	clientConn := &Connection{+		connected:  1,+		rawConn:    conn,+		localAddr:  conn.LocalAddr(),+		remoteAddr: conn.RemoteAddr(),++		writeBufferChan: make(chan *[]iobufferpool.IoBuffer, 8),++		mu:               sync.Mutex{},+		connStopChan:     
make(chan struct{}),+		listenerStopChan: listenerStopChan,+	}++	if remoteAddr != nil {+		clientConn.remoteAddr = remoteAddr+	} else {+		clientConn.remoteAddr = conn.RemoteAddr() // udp server rawConn can not get remote address+	}+	return clientConn+}++// LocalAddr get connection local addr+func (c *Connection) LocalAddr() net.Addr {+	return c.localAddr+}++// RemoteAddr get connection remote addr(it's nil for udp server rawConn)+func (c *Connection) RemoteAddr() net.Addr {+	return c.rawConn.RemoteAddr()+}++// SetOnRead set connection read handle+func (c *Connection) SetOnRead(onRead func(buffer iobufferpool.IoBuffer)) {+	c.onRead = onRead+}++// OnRead set data read callback+func (c *Connection) OnRead(buffer iobufferpool.IoBuffer) {+	c.onRead(buffer)+}++// SetOnClose set close callback+func (c *Connection) SetOnClose(onclose func(event ConnectionEvent)) {+	c.onClose = onclose+}++// GetReadBuffer get connection red buffer+func (c *Connection) GetReadBuffer() iobufferpool.IoBuffer {+	return c.readBuffer+}++// Start running connection read/write loop+func (c *Connection) Start() {+	c.startOnce.Do(func() {+		c.startRWLoop()+	})+}++// State get connection running state+func (c *Connection) State() ConnState {+	if atomic.LoadUint32(&c.closed) == 1 {+		return ConnClosed+	}+	if atomic.LoadUint32(&c.connected) == 1 {+		return ConnActive+	}+	return ConnInit+}++// GoWithRecover wraps a `go func()` with recover()+func (c *Connection) goWithRecover(handler func(), recoverHandler func(r interface{})) {+	go func() {+		defer func() {+			if r := recover(); r != nil {+				logger.Errorf("tcp connection goroutine panic: %v\n%s\n", r, string(debug.Stack()))+				if recoverHandler != nil {+					go func() {+						defer func() {+							if p := recover(); p != nil {+								logger.Errorf("tcp connection goroutine panic: %v\n%s\n", p, string(debug.Stack()))+							}+						}()+						recoverHandler(r)+					}()+				}+			}+		}()+		handler()+	}()+}++func (c *Connection) startRWLoop() {+	
c.goWithRecover(func() {+		c.startReadLoop()+	}, func(r interface{}) {+		_ = c.Close(NoFlush, LocalClose)+	})++	c.goWithRecover(func() {+		c.startWriteLoop()+	}, func(r interface{}) {+		_ = c.Close(NoFlush, LocalClose)+	})+}++// Write receive other connection data+func (c *Connection) Write(buffers ...iobufferpool.IoBuffer) (err error) {+	defer func() {+		if r := recover(); r != nil {+			logger.Errorf("tcp connection has closed, local addr: %s, remote addr: %s, err: %+v",+				c.localAddr.String(), c.remoteAddr.String(), r)+			err = ErrConnectionHasClosed+		}+	}()++	select {+	case c.writeBufferChan <- &buffers:+		return+	default:+	}++	// try to send data again in 60 seconds+	t := timerpool.Get(60 * time.Second)+	select {+	case c.writeBufferChan <- &buffers:+	case <-t.C:+		err = ErrWriteBufferChanTimeout+	}+	timerpool.Put(t)+	return+}++func (c *Connection) startReadLoop() {+	for {+		select {+		case <-c.connStopChan:+			return+		case <-c.listenerStopChan:+			return+		default:+			err := c.doReadIO()+			if err != nil {+				if te, ok := err.(net.Error); ok && te.Timeout() {+					if c.readBuffer != nil && c.readBuffer.Len() == 0 && c.readBuffer.Cap() > iobufferpool.DefaultBufferReadCapacity {+						c.readBuffer.Free()+						c.readBuffer.Alloc(iobufferpool.DefaultBufferReadCapacity)+					}+					continue+				}++				// normal close or health check+				if c.lastBytesSizeRead == 0 || err == io.EOF {+					logger.Infof("tcp connection error on read, local addr: %s, remote addr: %s, err: %s",+						c.localAddr.String(), c.remoteAddr.String(), err.Error())+				} else {+					logger.Errorf("tcp connection error on read, local addr: %s, remote addr: %s, err: %s",+						c.localAddr.String(), c.remoteAddr.String(), err.Error())+				}++				if err == io.EOF {+					_ = c.Close(NoFlush, RemoteClose)+				} else {+					_ = c.Close(NoFlush, OnReadErrClose)+				}+				return+			}+		}+	}+}++func (c *Connection) startWriteLoop() {+	var err error+	for {+		select {+		case <-c.listenerStopChan:+			
return+		case buf, ok := <-c.writeBufferChan:+			if !ok {+				return+			}+			c.appendBuffer(buf)+		OUTER:+			for i := 0; i < 8; i++ {+				select {+				case buf, ok := <-c.writeBufferChan:+					if !ok {+						return+					}+					c.appendBuffer(buf)+				default:+					break OUTER+				}+			}++			_ = c.rawConn.SetWriteDeadline(time.Now().Add(15 * time.Second))+			_, err = c.doWrite()+		}++		if err != nil {+			if err == iobufferpool.ErrEOF {+				logger.Debugf("tcp connection local close with eof, local addr: %s, remote addr: %s",+					c.localAddr.String(), c.remoteAddr.String())+				_ = c.Close(NoFlush, LocalClose)+			} else {+				logger.Errorf("tcp connection error on write, local addr: %s, remote addr: %s, err: %+v",+					c.localAddr.String(), c.remoteAddr.String(), err)+			}++			if te, ok := err.(net.Error); ok && te.Timeout() {+				_ = c.Close(NoFlush, OnWriteTimeout)+			}+			//other write errs not close connection, because readbuffer may have unread data, wait for readloop close connection,+			return+		}+	}+}++func (c *Connection) appendBuffer(ioBuffers *[]iobufferpool.IoBuffer) {+	if ioBuffers == nil {+		return+	}+	for _, buf := range *ioBuffers {+		if buf == nil {+			continue+		}+		c.ioBuffers = append(c.ioBuffers, buf)+		c.writeBuffers = append(c.writeBuffers, buf.Bytes())+	}+}++// Close connection close function+func (c *Connection) Close(ccType CloseType, event ConnectionEvent) (err error) {+	defer func() {+		if r := recover(); r != nil {+			logger.Errorf("tcp connection close panic, err: %+v\n%s", r, string(debug.Stack()))+		}+	}()++	if ccType == FlushWrite {+		_ = c.Write(iobufferpool.NewIoBufferEOF())+		return nil+	}++	if !atomic.CompareAndSwapUint32(&c.closed, 0, 1) {

The use of CompareAndSwap like this is problematic in most cases and cannot guarantee goroutine safety.

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package udpproxy
+
+import (
+	"fmt"
+	"net"
+	"sync/atomic"
+	"time"
+
+	"github.com/megaease/easegress/pkg/logger"
+	"github.com/megaease/easegress/pkg/util/iobufferpool"
+)
+
+type session struct {
+	upstreamAddr          string
+	downstreamAddr        *net.UDPAddr
+	downstreamIdleTimeout time.Duration
+	upstreamIdleTimeout   time.Duration
+
+	upstreamConn net.Conn
+	writeBuf     chan *iobufferpool.IoBuffer
+	stopChan     chan struct{}
+	stopped      uint32
+}
+
+func newSession(downstreamAddr *net.UDPAddr, upstreamAddr string, upstreamConn net.Conn,
+	downstreamIdleTimeout, upstreamIdleTimeout time.Duration) *session {
+	s := session{
+		upstreamAddr:          upstreamAddr,
+		downstreamAddr:        downstreamAddr,
+		upstreamConn:          upstreamConn,
+		upstreamIdleTimeout:   upstreamIdleTimeout,
+		downstreamIdleTimeout: downstreamIdleTimeout,
+
+		writeBuf: make(chan *iobufferpool.IoBuffer, 512),
+		stopChan: make(chan struct{}, 1),
+	}
+
+	go func() {
+		var t *time.Timer
+		var idleCheck <-chan time.Time
+
+		if downstreamIdleTimeout > 0 {
+			t = time.NewTimer(downstreamIdleTimeout)
+			idleCheck = t.C
+		}
+
+		for {
+			select {
+			case <-idleCheck:
+				s.Close()
+			case buf, ok := <-s.writeBuf:
+				if !ok {
+					s.Close()
+					continue
+				}
+
+				if t != nil {
+					if !t.Stop() {
+						<-t.C
+					}
+					t.Reset(downstreamIdleTimeout)
+				}
+
+				bufLen := (*buf).Len()
+				n, err := s.upstreamConn.Write((*buf).Bytes())
+				_ = iobufferpool.PutIoBuffer(*buf)
+
+				if err != nil {
+					logger.Errorf("udp connection flush data to upstream(%s) failed, err: %+v", upstreamAddr, err)
+					s.cleanWriteBuf()
+					break
+				}
+
+				if bufLen != n {
+					logger.Errorf("udp connection flush data to upstream(%s) failed, should write %d but written %d",
+						upstreamAddr, bufLen, n)
+					s.cleanWriteBuf()
+					break
+				}
+
+			case <-s.stopChan:
+				if !atomic.CompareAndSwapUint32(&s.stopped, 0, 1) {

s.stopped is used as a lock, but the use of CompareAndSwap like this won't work in most cases. Please check the comment below to confirm the behavior is as expected.

jxd134

comment created time in 2 days
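One conventional way to make the teardown idempotent without pressing the stopped flag into service as a lock is sync.Once. A minimal sketch under that assumption (hypothetical code, not the easegress implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// session sketches an idempotent Close: sync.Once replaces the
// CompareAndSwap flag, so concurrent callers cannot race on teardown.
type session struct {
	closeOnce sync.Once
	stopChan  chan struct{}
}

func newSession() *session {
	return &session{stopChan: make(chan struct{})}
}

// Close may be called from any number of goroutines;
// the channel is closed exactly once.
func (s *session) Close() {
	s.closeOnce.Do(func() {
		close(s.stopChan)
		// release other resources here (upstream conn, buffers, ...)
	})
}

// Closed reports whether the session has been shut down.
func (s *session) Closed() bool {
	select {
	case <-s.stopChan:
		return true
	default:
		return false
	}
}

func main() {
	s := newSession()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.Close() // safe: no double-close panic
		}()
	}
	wg.Wait()
	fmt.Println(s.Closed()) // true
}
```

sync.Once guarantees the teardown body runs exactly once and that later callers observe its effects, which a bare flag check cannot.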

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package tcpproxy

Could we reuse this file and avoid duplicating it for UDP?

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package tcpproxy

This file and its UDP counterpart are nearly identical; can we avoid the duplication?

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package udpproxy
+
+import (
+	"fmt"
+	"net"
+	"sync/atomic"
+	"time"
+
+	"github.com/megaease/easegress/pkg/logger"
+	"github.com/megaease/easegress/pkg/util/iobufferpool"
+)
+
+type session struct {
+	upstreamAddr          string
+	downstreamAddr        *net.UDPAddr
+	downstreamIdleTimeout time.Duration
+	upstreamIdleTimeout   time.Duration
+
+	upstreamConn net.Conn
+	writeBuf     chan *iobufferpool.IoBuffer
+	stopChan     chan struct{}
+	stopped      uint32
+}
+
+func newSession(downstreamAddr *net.UDPAddr, upstreamAddr string, upstreamConn net.Conn,
+	downstreamIdleTimeout, upstreamIdleTimeout time.Duration) *session {
+	s := session{
+		upstreamAddr:          upstreamAddr,
+		downstreamAddr:        downstreamAddr,
+		upstreamConn:          upstreamConn,
+		upstreamIdleTimeout:   upstreamIdleTimeout,
+		downstreamIdleTimeout: downstreamIdleTimeout,
+
+		writeBuf: make(chan *iobufferpool.IoBuffer, 512),
+		stopChan: make(chan struct{}, 1),
+	}
+
+	go func() {
+		var t *time.Timer
+		var idleCheck <-chan time.Time
+
+		if downstreamIdleTimeout > 0 {
+			t = time.NewTimer(downstreamIdleTimeout)
+			idleCheck = t.C
+		}
+
+		for {
+			select {
+			case <-idleCheck:
+				s.Close()
+			case buf, ok := <-s.writeBuf:
+				if !ok {
+					s.Close()
+					continue
+				}
+
+				if t != nil {
+					if !t.Stop() {
+						<-t.C
+					}
+					t.Reset(downstreamIdleTimeout)
+				}
+
+				bufLen := (*buf).Len()
+				n, err := s.upstreamConn.Write((*buf).Bytes())
+				_ = iobufferpool.PutIoBuffer(*buf)
+
+				if err != nil {
+					logger.Errorf("udp connection flush data to upstream(%s) failed, err: %+v", upstreamAddr, err)
+					s.cleanWriteBuf()
+					break
+				}
+
+				if bufLen != n {
+					logger.Errorf("udp connection flush data to upstream(%s) failed, should write %d but written %d",
+						upstreamAddr, bufLen, n)
+					s.cleanWriteBuf()
+					break
+				}
+
+			case <-s.stopChan:
+				if !atomic.CompareAndSwapUint32(&s.stopped, 0, 1) {
+					break
+				}
+				if t != nil {
+					t.Stop()
+				}
+				_ = s.upstreamConn.Close()
+				s.cleanWriteBuf()
+			}
+		}
+	}()
+
+	return &s
+}
+
+// Write send data to buffer channel, wait flush to upstream
+func (s *session) Write(buf *iobufferpool.IoBuffer) error {
+	if atomic.LoadUint32(&s.stopped) == 1 {
+		return fmt.Errorf("udp connection from %s to %s has closed", s.downstreamAddr.String(), s.upstreamAddr)
+	}
+
+	select {

What will happen if s.stopped becomes 1 here?

jxd134

comment created time in 2 days
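To make the question concrete: if s.stopped flips to 1 (and teardown closes the channel) after the Load check passes but before the send, the send hits a closed channel and panics. The sketch below forces that interleaving deterministically (hypothetical names, not the easegress code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// session reproduces the check-then-send pattern under review.
type session struct {
	stopped  uint32
	writeBuf chan int
}

// Write checks the stopped flag, then sends. The window between the
// two steps is exactly what the review comment is asking about.
func (s *session) Write(v int) (err error) {
	if atomic.LoadUint32(&s.stopped) == 1 {
		return fmt.Errorf("session has closed")
	}
	// If another goroutine sets stopped and closes writeBuf right
	// here, the send below panics; recover turns it into an error.
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("send on closed channel: %v", r)
		}
	}()
	s.writeBuf <- v
	return nil
}

func main() {
	s := &session{writeBuf: make(chan int)}
	// Simulate the racy interleaving deterministically: teardown has
	// already closed the channel, but stopped still reads as 0.
	close(s.writeBuf)
	err := s.Write(42)
	fmt.Println(err != nil) // true: the send panicked and was recovered
}
```

The Load check alone cannot protect the send; either the send must tolerate the panic (as here) or the close must be coordinated with writers, e.g. via a mutex or by signaling through a dedicated stop channel in the same select.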

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package iobufferpool
+
... (pipe and ioBuffer implementations — diff context abridged) ...
+
+func (b *ioBuffer) Alloc(size int) {
+	if b.buf != nil {
+		b.Free()
+	}
+	if size <= 0 {
+		size = DefaultSize
+	}
+	b.b = b.makeSlice(size)
+	b.buf = *b.b
+	b.buf = b.buf[:0]
+}
+
... (context abridged) ...
+
+func (b *ioBuffer) makeSlice(n int) *[]byte {

ioBuffer should not provide this function?

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package udpproxy
+
... (runtime type, event definitions, fsm loop, and reload logic — diff context abridged) ...
+
+func (r *runtime) startServer() {
+	listenAddr, err := net.ResolveUDPAddr("udp", fmt.Sprintf(":%d", r.spec.Port))
+	if err != nil {
+		r.setState(stateFailed)
+		logger.Errorf("parse udp listen addr(%s) failed, err: %+v", r.spec.Port, err)
+		return
+	}
+
+	r.serverConn, err = net.ListenUDP("udp", listenAddr)
+	if err != nil {
+		r.setState(stateFailed)
+		logger.Errorf("create udp listener(%s) failed, err: %+v", r.spec.Port, err)
+		return
+	}
+	r.setState(stateRunning)
+
+	var cp *connPool
+	if r.spec.HasResponse {
+		cp = newConnPool()
+	}
+
+	go func() {
+		defer cp.close()
+
+		buf := iobufferpool.GetIoBuffer(iobufferpool.UDPPacketMaxSize)
+		for {
+			buf.Reset()
+			n, downstreamAddr, err := r.serverConn.ReadFromUDP(buf.Bytes()[:buf.Cap()])
+			_ = buf.Grow(n)
+
+			if err != nil {
+				if r.getState() != stateRunning {
+					return
+				}

In a concurrent environment, this check is not safe: the state could become failed and then recover to running, which leads to a leak of the current goroutine.

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package udpproxy
+
... (UDPServer object definition, lifecycle methods, and connPool type — diff context abridged) ...
+
+func newConnPool() *connPool {
+	return &connPool{
+		pool: make(map[string]net.Conn),
+	}
+}
+
+func (c *connPool) get(addr string) net.Conn {
+	if c == nil {
+		return nil
+	}
+
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	return c.pool[addr]
+}
+
+func (c *connPool) put(addr string, conn net.Conn) {
+	if c == nil {
+		return
+	}
+
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	c.pool[addr] = conn
+}
+
+func (c *connPool) close() {
+	if c == nil {
+		return
+	}
+
+	c.mu.Lock()
+	defer c.mu.Unlock()

This lock is useless: when calling close, we have to make sure no other goroutines will access c anymore.

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package tcpproxy
+
... (Connection struct, NewDownstreamConn, and accessors — diff context abridged) ...
+
+// GoWithRecover wraps a `go func()` with recover()
+func (c *Connection) goWithRecover(handler func(), recoverHandler func(r interface{})) {
+	go func() {
+		defer func() {
+			if r := recover(); r != nil {
+				logger.Errorf("tcp connection goroutine panic: %v\n%s\n", r, string(debug.Stack()))
+				if recoverHandler != nil {
+					go func() {

do we need to create another goroutine here?

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package tcpproxy
+
... (Connection struct, constructor, and callback setters — diff context abridged) ...
+
+// GetReadBuffer get connection red buffer
// GetReadBuffer get connection read buffer
jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package udpproxy
+
... (session type and newSession setup — diff context abridged) ...
+
+		for {
+			select {
+			case <-idleCheck:
+				s.Close()
+			case buf, ok := <-s.writeBuf:
+				if !ok {
+					s.Close()
+					continue
+				}
+
+				if t != nil {
+					if !t.Stop() {
+						<-t.C
+					}
+					t.Reset(downstreamIdleTimeout)
+				}
+
+				bufLen := (*buf).Len()
+				n, err := s.upstreamConn.Write((*buf).Bytes())
+				_ = iobufferpool.PutIoBuffer(*buf)
+
+				if err != nil {
+					logger.Errorf("udp connection flush data to upstream(%s) failed, err: %+v", upstreamAddr, err)
+					s.cleanWriteBuf()
+					break

do you mean break the for loop? this can only break the select statement.

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package iobufferpool
+
+import "io"
+
+// IoBuffer io buffer for stream proxy
+type IoBuffer interface {

This is an overall comment for the IoBuffer interface and the iobufferpool package. From the viewpoint of API design, a good practice is to keep the API set as minimal as possible and to keep the APIs orthogonal. It seems that IoBuffer (and the iobufferpool package) provides too many functionalities and tries to do everything related to a buffer, which makes it overcomplicated. If we look at standard library packages like bytes.Buffer, sync.Pool, and binary, each focuses on one simple job and does it in a clear and correct way; yet when we combine them, they become very powerful.

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+// Copyright 2017-2018 The NATS Authors

The copyright is not MegaEase's. Why add this file? No code is using it.

jxd134

comment created time in 3 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package iobufferpool
+
... (imports, constants, and pipe type — diff context abridged) ...
+
+// Read waits until data is available and copies bytes
+// from the buffer into p.
+func (p *pipe) Read(d []byte) (n int, err error) {
+	p.mu.Lock()
+	defer p.mu.Unlock()
+	if p.c.L == nil {
+		p.c.L = &p.mu
+	}

Propose moving this initialization into NewPipeBuffer to avoid checking it on each Read/Write operation.

jxd134

comment created time in 2 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package iobufferpool
+
... (byteBufferPool slab pool, slot calculation, and take — diff context abridged) ...
+
+// give returns *[]byte to byteBufferPool
+func (p *byteBufferPool) give(buf *[]byte) {
+	if buf == nil {
+		return
+	}
+	size := cap(*buf)
+	slot := p.slot(size)
+	if slot == errSlot {
+		return
+	}
+	if size != int(p.pool[slot].defaultSize) {
+		return
+	}
+	p.pool[slot].pool.Put(buf)
+}
+
... (ByteBufferPoolContainer — context abridged) ...
+
+// GetBytes returns *[]byte from byteBufferPool
+func GetBytes(size int) *[]byte {
+	return bbPool.take(size)
+}
+
+// PutBytes Put *[]byte to byteBufferPool
+func PutBytes(buf *[]byte) {

propose to change parameter & return value type to []byte

jxd134

comment created time in 3 days

Pull request review comment megaease/easegress

add tcp/udp proxy feature

+/* ... Apache 2.0 license header (same as above) ... */
+
+package iobufferpool
+
... (byteBufferPool type, bufferSlot, and slot calculation — diff context abridged) ...
+
+// take returns *[]byte from byteBufferPool
+func (p *byteBufferPool) take(size int) *[]byte {

propose to rename it to get, and I think changing the type of the return value to []byte will be better.

jxd134

comment created time in 3 days

Pull request review comment on megaease/easegress

add tcp/udp proxy feature

```diff
 func New() *Options {
 	opt.flags.StringSliceVar(&opt.ClusterAdvertiseClientURLs, "cluster-advertise-client-urls", []string{"http://localhost:2379"}, "List of this member’s client URLs to advertise to the rest of the cluster.")
 	opt.flags.StringSliceVar(&opt.ClusterInitialAdvertisePeerURLs, "cluster-initial-advertise-peer-urls", []string{"http://localhost:2380"}, "List of this member’s peer URLs to advertise to the rest of the cluster.")
 	opt.flags.StringSliceVar(&opt.ClusterJoinURLs, "cluster-join-urls", nil, "List of URLs to join, when the first url is the same with any one of cluster-initial-advertise-peer-urls, it means to join itself, and this config will be treated empty.")
-	opt.flags.StringVar(&opt.APIAddr, "api-addr", "localhost:2381", "Address([host]:port) to listen on for administration traffic.")
+	opt.flags.StringVar(&opt.APIAddr, "api-addr", "localhost:2381", "HostPort([host]:port) to listen on for administration traffic.")
```

I think we don't need this change.

jxd134

comment created time in 2 days

Pull request review comment on megaease/easegress

add tcp/udp proxy feature

```diff
+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package iobufferpool
+
+import (
+	"encoding/binary"
+	"errors"
+	"io"
+	"sync"
+	"sync/atomic"
+)
+
+const (
+	// AutoExpand auto expand io buffer
+	AutoExpand      = -1
+	MinRead         = 1 << 9
+	MaxRead         = 1 << 17
+	ResetOffMark    = -1
+	DefaultSize     = 1 << 4
+	MaxBufferLength = 1 << 20
+	MaxThreshold    = 1 << 22
+)
+
+var nullByte []byte
+
+var (
+	// ErrEOF io buffer eof sign
+	ErrEOF = errors.New("EOF")
+	// ErrInvalidWriteCount io buffer: invalid write count
+	ErrInvalidWriteCount = errors.New("io buffer: invalid write count")
+)
+
+type pipe struct {
+	IoBuffer
+	mu sync.Mutex
+	c  sync.Cond
+
+	err error
+}
+
+func (p *pipe) Len() int {
+	p.mu.Lock()
+	defer p.mu.Unlock()
+	if p.IoBuffer == nil {
+		return 0
+	}
+	return p.IoBuffer.Len()
+}
+
+// Read waits until data is available and copies bytes
+// from the buffer into p.
+func (p *pipe) Read(d []byte) (n int, err error) {
+	p.mu.Lock()
+	defer p.mu.Unlock()
+	if p.c.L == nil {
+		p.c.L = &p.mu
+	}
+	for {
+		if p.IoBuffer != nil && p.IoBuffer.Len() > 0 {
+			return p.IoBuffer.Read(d)
+		}
+		if p.err != nil {
+			return 0, p.err
+		}
+		p.c.Wait()
+	}
+}
+
+var errClosedPipeWrite = errors.New("write on closed buffer")
+
+// Write copies bytes from p into the buffer and wakes a reader.
+// It is an error to write more data than the buffer can hold.
+func (p *pipe) Write(d []byte) (n int, err error) {
+	p.mu.Lock()
+	defer p.mu.Unlock()
+	if p.c.L == nil {
+		p.c.L = &p.mu
+	}
+	defer p.c.Signal()
+	if p.err != nil {
+		return 0, errClosedPipeWrite
+	}
+	return len(d), p.IoBuffer.Append(d)
+}
+
+// CloseWithError causes the next Read (waking up a current blocked
+// Read if needed) to return the provided err after all data has been
+// read.
+//
+// The error must be non-nil.
+func (p *pipe) CloseWithError(err error) {
+	if err == nil {
+		err = io.EOF
+	}
+	p.mu.Lock()
+	defer p.mu.Unlock()
+	if p.c.L == nil {
+		p.c.L = &p.mu
+	}
+	p.err = err
+	defer p.c.Signal()
+}
+
+// NewPipeBuffer create pipe buffer with fixed capacity
+func NewPipeBuffer(capacity int) IoBuffer {
+	return &pipe{
+		IoBuffer: newIoBuffer(capacity),
+	}
+}
+
+// ioBuffer is an implementation of IoBuffer
+type ioBuffer struct {
+	buf     []byte // contents: buf[off : len(buf)]
+	off     int    // read from &buf[off], write to &buf[len(buf)]
+	offMark int
+	count   int32
+	eof     bool
+
+	b *[]byte
+}
+
+func newIoBuffer(capacity int) IoBuffer {
+	buffer := &ioBuffer{
+		offMark: ResetOffMark,
+		count:   1,
+	}
+	if capacity <= 0 {
+		capacity = DefaultSize
+	}
+	buffer.b = GetBytes(capacity)
+	buffer.buf = (*buffer.b)[:0]
+	return buffer
+}
+
+// NewIoBufferString new io buffer with string
+func NewIoBufferString(s string) IoBuffer {
+	if s == "" {
+		return newIoBuffer(0)
+	}
+	return &ioBuffer{
+		buf:     []byte(s),
+		offMark: ResetOffMark,
+		count:   1,
+	}
+}
+
+// NewIoBufferBytes new io buffer with bytes array
+func NewIoBufferBytes(bytes []byte) IoBuffer {
+	if bytes == nil {
+		return NewIoBuffer(0)
+	}
+	return &ioBuffer{
+		buf:     bytes,
+		offMark: ResetOffMark,
+		count:   1,
+	}
+}
+
+// NewIoBufferEOF new io buffer with eof sign
+func NewIoBufferEOF() IoBuffer {
+	buf := newIoBuffer(0)
+	buf.SetEOF(true)
+	return buf
+}
+
+func (b *ioBuffer) Read(p []byte) (n int, err error) {
+	if b.off >= len(b.buf) {
+		b.Reset()
+
+		if len(p) == 0 {
+			return
+		}
+
+		return 0, io.EOF
+	}
+
+	n = copy(p, b.buf[b.off:])
+	b.off += n
+
+	return
+}
+
+func (b *ioBuffer) Grow(n int) error {
+	_, ok := b.tryGrowByReslice(n)
+
+	if !ok {
+		b.grow(n)
+	}
+
+	return nil
+}
+
+func (b *ioBuffer) ReadOnce(r io.Reader) (n int64, err error) {
+	var m int
+
+	if b.off > 0 && b.off >= len(b.buf) {
+		b.Reset()
+	}
+
+	if b.off >= (cap(b.buf) - len(b.buf)) {
+		b.copy(0)
+	}
+
+	// free max buffers avoid memory leak
+	if b.off == len(b.buf) && cap(b.buf) > MaxBufferLength {
+		b.Free()
+		b.Alloc(MaxRead)
+	}
+
+	l := cap(b.buf) - len(b.buf)
+
+	m, err = r.Read(b.buf[len(b.buf):cap(b.buf)])
+
+	b.buf = b.buf[0 : len(b.buf)+m]
+	n = int64(m)
+
+	// Not enough space anywhere, we need to allocate.
+	if l == m {
+		b.copy(AutoExpand)
+	}
+
+	return n, err
+}
+
+func (b *ioBuffer) ReadFrom(r io.Reader) (n int64, err error) {
+	if b.off >= len(b.buf) {
+		b.Reset()
+	}
+
+	for {
+		if free := cap(b.buf) - len(b.buf); free < MinRead {
+			// not enough space at end
+			if b.off+free < MinRead {
+				// not enough space using beginning of buffer;
+				// double buffer capacity
+				b.copy(MinRead)
+			} else {
+				b.copy(0)
+			}
+		}
+
+		m, e := r.Read(b.buf[len(b.buf):cap(b.buf)])
+
+		b.buf = b.buf[0 : len(b.buf)+m]
+		n += int64(m)
+
+		if e == io.EOF {
+			break
+		}
+
+		if m == 0 {
+			break
+		}
+
+		if e != nil {
+			return n, e
+		}
+	}
+
+	return
+}
+
+func (b *ioBuffer) Write(p []byte) (n int, err error) {
+	m, ok := b.tryGrowByReslice(len(p))
+
+	if !ok {
+		m = b.grow(len(p))
+	}
+
+	return copy(b.buf[m:], p), nil
+}
+
+func (b *ioBuffer) WriteString(s string) (n int, err error) {
+	m, ok := b.tryGrowByReslice(len(s))
+
+	if !ok {
+		m = b.grow(len(s))
+	}
+
+	return copy(b.buf[m:], s), nil
+}
+
+func (b *ioBuffer) tryGrowByReslice(n int) (int, bool) {
+	if l := len(b.buf); l+n <= cap(b.buf) {
+		b.buf = b.buf[:l+n]
+
+		return l, true
+	}
+
+	return 0, false
+}
+
+func (b *ioBuffer) grow(n int) int {
+	m := b.Len()
+
+	// If buffer is empty, reset to recover space.
+	if m == 0 && b.off != 0 {
+		b.Reset()
+	}
+
+	// Try to grow by means of a reslice.
+	if i, ok := b.tryGrowByReslice(n); ok {
+		return i
+	}
+
+	if m+n <= cap(b.buf)/2 {
+		// We can slide things down instead of allocating a new
+		// slice. We only need m+n <= cap(b.buf) to slide, but
+		// we instead let capacity get twice as large so we
+		// don't spend all our time copying.
+		b.copy(0)
+	} else {
+		// Not enough space anywhere, we need to allocate.
+		b.copy(n)
+	}
+
+	// Restore b.off and len(b.buf).
+	b.off = 0
+	b.buf = b.buf[:m+n]
+
+	return m
+}
+
+func (b *ioBuffer) WriteTo(w io.Writer) (n int64, err error) {
+	for b.off < len(b.buf) {
+		nBytes := b.Len()
+		m, e := w.Write(b.buf[b.off:])
+
+		if m > nBytes {
+			panic(ErrInvalidWriteCount)
+		}
+
+		b.off += m
+		n += int64(m)
+
+		if e != nil {
+			return n, e
+		}
+
+		if m == 0 || m == nBytes {
+			return n, nil
+		}
+	}
+
+	return
+}
+
+func (b *ioBuffer) WriteByte(p byte) error {
+	m, ok := b.tryGrowByReslice(1)
+
+	if !ok {
+		m = b.grow(1)
+	}
+
+	b.buf[m] = p
+	return nil
+}
+
+func (b *ioBuffer) WriteUint16(p uint16) error {
+	m, ok := b.tryGrowByReslice(2)
+
+	if !ok {
+		m = b.grow(2)
+	}
+
+	binary.BigEndian.PutUint16(b.buf[m:], p)
+	return nil
+}
+
+func (b *ioBuffer) WriteUint32(p uint32) error {
+	m, ok := b.tryGrowByReslice(4)
+
+	if !ok {
+		m = b.grow(4)
+	}
+
+	binary.BigEndian.PutUint32(b.buf[m:], p)
+	return nil
+}
+
+func (b *ioBuffer) WriteUint64(p uint64) error {
+	m, ok := b.tryGrowByReslice(8)
+
+	if !ok {
+		m = b.grow(8)
+	}
+
+	binary.BigEndian.PutUint64(b.buf[m:], p)
+	return nil
+}
+
+func (b *ioBuffer) Append(data []byte) error {
+	if b.off >= len(b.buf) {
+		b.Reset()
+	}
+
+	dataLen := len(data)
+
+	if free := cap(b.buf) - len(b.buf); free < dataLen {
+		// not enough space at end
+		if b.off+free < dataLen {
+			// not enough space using beginning of buffer;
+			// double buffer capacity
+			b.copy(dataLen)
+		} else {
+			b.copy(0)
+		}
+	}
+
+	m := copy(b.buf[len(b.buf):len(b.buf)+dataLen], data)
+	b.buf = b.buf[0 : len(b.buf)+m]
+
+	return nil
+}
+
+func (b *ioBuffer) AppendByte(data byte) error {
+	return b.Append([]byte{data})
+}
+
+func (b *ioBuffer) Peek(n int) []byte {
+	if len(b.buf)-b.off < n {
+		return nil
+	}
+
+	return b.buf[b.off : b.off+n]
+}
+
+func (b *ioBuffer) Mark() {
+	b.offMark = b.off
+}
+
+func (b *ioBuffer) Restore() {
+	if b.offMark != ResetOffMark {
+		b.off = b.offMark
+		b.offMark = ResetOffMark
+	}
+}
```

If my understanding is correct, Mark and Restore save the current reading offset and restore it later. But from the view of API design, this is not a good solution; a better way is to follow the design of os.File.Seek.

jxd134

comment created time in 2 days

Pull request review comment on megaease/easegress

add tcp/udp proxy feature

```diff
+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package iobufferpool
+
+import "io"
+
+// IoBuffer io buffer for stream proxy
+type IoBuffer interface {
+	// Read reads the next len(p) bytes from the buffer or until the buffer
+	// is drained. The return value n is the number of bytes read. If the
+	// buffer has no data to return, err is io.EOF (unless len(p) is zero);
+	// otherwise it is nil.
+	Read(p []byte) (n int, err error)
+
+	// ReadOnce make a one-shot read and appends it to the buffer, growing
+	// the buffer as needed. The return value n is the number of bytes read. Any
+	// error except io.EOF encountered during the read is also returned. If the
+	// buffer becomes too large, ReadFrom will panic with ErrTooLarge.
+	ReadOnce(r io.Reader) (n int64, err error)
+
+	// ReadFrom reads data from r until ErrEOF and appends it to the buffer, growing
+	// the buffer as needed. The return value n is the number of bytes read. Any
+	// error except io.EOF encountered during the read is also returned. If the
+	// buffer becomes too large, ReadFrom will panic with ErrTooLarge.
+	ReadFrom(r io.Reader) (n int64, err error)
+
+	// Grow updates the length of the buffer by n, growing the buffer as
+	// needed. The return value n is the length of p; err is always nil. If the
+	// buffer becomes too large, Write will panic with ErrTooLarge.
+	Grow(n int) error
+
+	// Write appends the contents of p to the buffer, growing the buffer as
+	// needed. The return value n is the length of p; err is always nil. If the
+	// buffer becomes too large, Write will panic with ErrTooLarge.
+	Write(p []byte) (n int, err error)
+
+	// WriteString appends the string to the buffer, growing the buffer as
+	// needed. The return value n is the length of s; err is always nil. If the
+	// buffer becomes too large, Write will panic with ErrTooLarge.
+	WriteString(s string) (n int, err error)
+
+	// WriteByte appends the byte to the buffer, growing the buffer as
+	// needed. The return value n is the length of s; err is always nil. If the
+	// buffer becomes too large, Write will panic with ErrTooLarge.
+	WriteByte(p byte) error
+
+	// WriteUint16 appends the uint16 to the buffer, growing the buffer as
+	// needed. The return value n is the length of s; err is always nil. If the
```

Ditto, and the comments of other functions in this file have a similar issue.

jxd134

comment created time in 3 days

Pull request review comment on megaease/easegress

add tcp/udp proxy feature

```diff
+/*
+ * Copyright (c) 2017, MegaEase
+ * All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package iobufferpool
+
+import (
+	"sync"
+)
+
+const minShift = 6
+const maxShift = 18
+const errSlot = -1
+
+var bbPool *byteBufferPool
+
+func init() {
+	bbPool = newByteBufferPool()
+}
+
+// byteBufferPool is []byte pools
+type byteBufferPool struct {
+	minShift int
+	minSize  int
+	maxSize  int
+
+	pool []*bufferSlot
+}
+
+type bufferSlot struct {
+	defaultSize int
+	pool        sync.Pool
+}
+
+// newByteBufferPool returns byteBufferPool
+func newByteBufferPool() *byteBufferPool {
+	p := &byteBufferPool{
+		minShift: minShift,
+		minSize:  1 << minShift,
+		maxSize:  1 << maxShift,
+	}
+	for i := 0; i <= maxShift-minShift; i++ {
+		slab := &bufferSlot{
+			defaultSize: 1 << (uint)(i+minShift),
+		}
+		p.pool = append(p.pool, slab)
+	}
+
+	return p
+}
+
+func (p *byteBufferPool) slot(size int) int {
+	if size > p.maxSize {
+		return errSlot
+	}
+	slot := 0
+	shift := 0
+	if size > p.minSize {
+		size--
+		for size > 0 {
+			size = size >> 1
+			shift++
+		}
+		slot = shift - p.minShift
+	}
+
+	return slot
+}
+
+func newBytes(size int) []byte {
+	return make([]byte, size)
+}
+
+// take returns *[]byte from byteBufferPool
+func (p *byteBufferPool) take(size int) *[]byte {
+	slot := p.slot(size)
+	if slot == errSlot {
+		b := newBytes(size)
+		return &b
+	}
+	v := p.pool[slot].pool.Get()
+	if v == nil {
+		b := newBytes(p.pool[slot].defaultSize)
+		b = b[0:size]
+		return &b
+	}
```

propose to use sync.Pool.New for this.

jxd134

comment created time in 2 days
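The nil check in `take` disappears if each slot installs a `sync.Pool.New` allocator, as the reviewer suggests: `Get` then never returns nil, because the pool calls `New` on a miss. A minimal sketch, simplified to a single fixed-size slot with illustrative names (the real code would still route requests to the right slot via `slot(size)`):

```go
package main

import (
	"fmt"
	"sync"
)

// slot is one fixed-size bucket of a sized buffer pool. Setting
// pool.New moves the allocation fallback into the pool itself.
type slot struct {
	defaultSize int
	pool        sync.Pool
}

func newSlot(size int) *slot {
	s := &slot{defaultSize: size}
	s.pool.New = func() interface{} {
		b := make([]byte, s.defaultSize)
		return &b
	}
	return s
}

// take no longer needs a nil check: Get allocates via New on a miss.
// Assumes size <= defaultSize for this single-slot sketch.
func (s *slot) take(size int) *[]byte {
	b := s.pool.Get().(*[]byte)
	*b = (*b)[:size]
	return b
}

func main() {
	s := newSlot(64)
	b := s.take(10)
	fmt.Println(len(*b), cap(*b)) // 10 64
}
```

Besides removing the branch, this keeps the allocation policy in one place (the pool's constructor) instead of scattering `newBytes` calls through the callers.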