Oliver Albertini (oliverralbertini) · Pivotal @pivotal-cf · SF Bay Area

oliverralbertini/kinesis-freestyle-pro-configs 3

Layouts and settings for Kinesis Freestyle Pro keyboard.

oliverralbertini/CtCI 1

Working on problems from Cracking the Coding Interview (doing them in C).

amilkh/git-author 0

An easy way to set up multiple authors based on `git commit --template`. Depends on git-together.

oliverralbertini/ale 0

Check syntax in Vim asynchronously and fix files, with Language Server Protocol (LSP) support

oliverralbertini/bash-it 0

A community Bash framework.

oliverralbertini/bitlbee-facebook 0

Facebook protocol plugin for BitlBee

oliverralbertini/euler 0

Some Project Euler problems in Python.

oliverralbertini/fisheryates 0

Package Fisher-Yates shuffles collections of data in the Go Programming Language.
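For context, the Fisher-Yates algorithm the description refers to can be sketched as follows. This is a minimal Java illustration of the algorithm itself, not the actual Go package's API:

```java
import java.util.Arrays;
import java.util.Random;

public class FisherYates {
    // In-place Fisher-Yates shuffle: walks the array from the end, swapping
    // each element with a uniformly chosen element at or before it, so every
    // permutation is equally likely (given a fair random source).
    static void shuffle(int[] a, Random rng) {
        for (int i = a.length - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1); // uniform in [0, i]
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};
        shuffle(data, new Random());
        System.out.println(Arrays.toString(data));
    }
}
```

The shuffle rearranges elements but never adds or drops any, so the result is always a permutation of the input.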

oliverralbertini/git-resource 0

tracks commits in a branch of a Git repository

issue opened greenplum-db/pxf

ERROR: failed sending to remote component (data movement between Vertica and Greenplum)

Hi,

When we try to push data, we sometimes get an error: sometimes the data is inserted successfully, and other times we hit this issue. We are trying to push 500 million records; sometimes the insert fails and sometimes it does not. Please find the query and a sample error below.

Query: INSERT INTO external_table1 SELECT * FROM Table2 LIMIT 500000000

Error: The same error appears in the PXF logs on all segment hosts (/usr/local/pxf-user/logs/localhost.2020-11-24.log):

Nov 24, 2020 7:55:03 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [PXF REST Service] in context with path [/pxf] threw exception [javax.servlet.ServletException: java.sql.BatchUpdateException: [Vertica]VJDBC One or more rows were rejected by the server.] with root cause
java.sql.BatchUpdateException: [Vertica]VJDBC One or more rows were rejected by the server.
    at com.vertica.jdbc.common.SStatement.processBatchResults(Unknown Source)
    at com.vertica.jdbc.common.SPreparedStatement.executeBatch(Unknown Source)
    at com.vertica.jdbc.VerticaJdbc4PreparedStatementImpl.executeBatch(Unknown Source)
    at com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
    at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
    at org.greenplum.pxf.plugins.jdbc.writercallable.BatchWriterCallable.call(BatchWriterCallable.java:73)
    at org.greenplum.pxf.plugins.jdbc.JdbcAccessor.writeNextObject(JdbcAccessor.java:247)
    at org.greenplum.pxf.service.bridge.WriteBridge.setNext(WriteBridge.java:78)
    at org.greenplum.pxf.service.rest.WritableResource.stream(WritableResource.java:138)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
    at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.greenplum.pxf.service.servlet.SecurityServletFilter.lambda$doFilter$0(SecurityServletFilter.java:145)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.greenplum.pxf.service.servlet.SecurityServletFilter.doFilter(SecurityServletFilter.java:158)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:165)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:1025)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:452)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1201)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:654)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.base/java.lang.Thread.run(Thread.java:834)

Nov 24, 2020 7:55:29 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [PXF REST Service] in context with path [/pxf] threw exception [javax.servlet.ServletException: java.io.IOException: Invalid chunk header] with root cause
java.io.IOException: Invalid chunk header
    at org.apache.coyote.http11.filters.ChunkedInputFilter.throwIOException(ChunkedInputFilter.java:615)
    at org.apache.coyote.http11.filters.ChunkedInputFilter.doRead(ChunkedInputFilter.java:192)
    at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:316)
    at org.apache.coyote.Request.doRead(Request.java:442)
    at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
    at org.apache.tomcat.util.buf.ByteChunk.checkEof(ByteChunk.java:431)
    at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:369)
    at org.apache.catalina.connector.InputBuffer.readByte(InputBuffer.java:304)
    at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:106)
    at java.base/java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.greenplum.pxf.api.io.GPDBWritable.readPktLen(GPDBWritable.java:158)
    at org.greenplum.pxf.api.io.GPDBWritable.readFields(GPDBWritable.java:180)
    at org.greenplum.pxf.service.BridgeInputBuilder.makeInput(BridgeInputBuilder.java:60)
    at org.greenplum.pxf.service.bridge.WriteBridge.setNext(WriteBridge.java:69)
    at org.greenplum.pxf.service.rest.WritableResource.stream(WritableResource.java:138)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
    at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.greenplum.pxf.service.servlet.SecurityServletFilter.lambda$doFilter$0(SecurityServletFilter.java:145)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.greenplum.pxf.service.servlet.SecurityServletFilter.doFilter(SecurityServletFilter.java:158)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:165)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:1025)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:452)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1201)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:654)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.base/java.lang.Thread.run(Thread.java:834)

Thanks,
Joe

created time in 5 days

create branch greenplum-db/pxf

branch : ext-table-to-fdw

created branch time in 6 days

push event greenplum-db/pxf

Francisco Guerrero

commit sha 86119dfd55c0c5ffb1bf261be007d00fac33f5b1

WASB: Add missing dependency (#488)

After the Spring Boot merge, it appears there's a missing dependency for WASB. This commit adds the missing dependency.


Francisco Guerrero

commit sha 0aa66cdf4ecb9a2f2d2a9e4dbc39732cd4655f6e

Certification: Use tagged pxf_src for the certification pipeline [ci skip] (#489)

pxf_src might not have the right scripts to run the automation tests because the src might be out of sync with the released PXF version. For that reason, we use the tagged version of pxf_src to test the released PXF artifact.


Lisa Owen

commit sha d924bb744717dbfa902628980f183d2d8bd6f2b2

docs - add template upgrade topics for pxf v6 (#487)
  • docs - add template upgrade topics for v6
  • overview page to link to 5->6 upgrade topic


Francisco Guerrero

commit sha 99613090bcae47a33c1392dbcb0d8196a9e24ed1

Remove invalid GemFireXD profile


Lisa Owen

commit sha b44d00a03d84ad79ad01893c54724bda84179d0c

docs - remove deprecated-in-v5 config props and vars (#492)
  • docs - remove deprecated-in-v5 config props and vars
  • use code font for props, are->were


Lisa Owen

commit sha 7a06fe2517b098cab1e8f4c997c3749cf491f44c

docs - add note about non-support of hive 3 managed tables (#494)
  • docs - add note about non-support of hive 3 managed tables
  • href to jdbc topic


Francisco Guerrero

commit sha ec3b9864b39ffce777916d7eabefa1969890746d

Upgrade to Spring Boot 2.4.0


Francisco Guerrero

commit sha 905b47d883711c57f977f82216b7a1785ce82ac3

Serialize Fragment metadata using Kryo (#486)

In PXF 5.16.0, we introduced Kryo to serialize the Fragment's userData for Hive profiles. The original Spring Boot version combines Fragment metadata and userData into metadata, and serializes metadata into an escaped JSON string that is then deserialized by the HttpRequestParser during the bridge call. This introduced a regression in the Hive metadata optimization. To reduce the payload size, we now serialize metadata using Kryo instead of JSON. This commit fixes the regression introduced by the original Spring Boot version.
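To illustrate why a binary serializer tends to beat escaped JSON on payload size, here is a rough sketch using only the JDK's DataOutputStream. This is an assumption-laden illustration of the general trade-off, not Kryo and not the actual PXF classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;

public class PayloadSizeSketch {
    // Naive escaped-JSON encoding of a string list: every quote is escaped
    // with a backslash, as happens when JSON is embedded in another string
    // field (illustrative only).
    static String asEscapedJson(List<String> items) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < items.size(); i++) {
            if (i > 0) sb.append(",");
            sb.append("\\\"").append(items.get(i)).append("\\\"");
        }
        return sb.append("]").toString();
    }

    // Length-prefixed binary encoding: the general idea behind preferring a
    // binary serializer (such as Kryo) over escaped JSON for large metadata.
    static byte[] asBinary(List<String> items) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(items.size());
            for (String s : items) {
                out.writeUTF(s); // 2-byte length prefix + UTF-8 bytes
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for in-memory streams
        }
    }

    public static void main(String[] args) {
        List<String> cols = List.of("id", "name", "ts");
        System.out.println("escaped JSON bytes: " + asEscapedJson(cols).length());
        System.out.println("binary bytes:       " + asBinary(cols).length);
    }
}
```

The binary form drops the quoting, escaping, and delimiter overhead, which is where most of the payload reduction comes from for metadata-heavy requests.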


Lav Jain

commit sha 0725f860700085b5c697ca0d81a35d2c04ccce4f

Convert PXF-CLI to use go modules instead of dep (#457)


Francisco Guerrero

commit sha 440f6ff52c70d26f227e09213c8bd3c9228417c3

Update pxf-build-base dependency building after Spring Boot merge (#490)


Francisco Guerrero

commit sha 64f0101b942b0885106d202a6de15c47567f1e45

Docker: Remove go dep from Dockerfiles (#499)


Francisco Guerrero

commit sha 2c33c7a12c343e2e31ccce1f051b1be954d1831a

Support pushing predicates of type varchar (#498)

This commit adds support for pushing predicates for varchar types. For predicates on varchar types, the node is of type `RelabelType`, which wraps the Var that we need for the predicate.


Francisco Guerrero

commit sha b80ff530c8b54f281f5f982f5dcf8d1db3006f7b

Docker: Additional cleanup
  • Remove tomcat chown + linking as tomcat is no longer a dependency
  • Rename go-dep-cached-sources to go-mod-cached-sources


Lisa Owen

commit sha 398ddef3b38c29b7c7563dbd2ac6db9f57271e66

docs - install file/dir locations; misc install-related edits (#493)
  • docs - install file/dir locations; misc install-related edits
  • misc edits requested by david
  • some of the requested edits from review
  • include more info about PXF_BASE on the install file page


Alexander Denissov

commit sha 35a7a1678153f23a478b22496dc0c85c02a24a6a

Refactored CLI multi-node tests (#501)


Francisco Guerrero

commit sha f1f3871e7db2cec436076bae948d8720d92c7de3

pxf-build-base: Cache gradle in the build dependencies

Gradle is not being cached as part of the build dependencies because we are using an image that already has Gradle. By switching to the Java 8 image, we ensure that Gradle will be downloaded to the ~/.gradle directory during dependency cache building.


Francisco Guerrero

commit sha b637b126cbb0b500af2c17b1f047641f6106dcee

Docs: Update parquet support for IN operator


Francisco Guerrero

commit sha 7912ad4293d07f50864e82e1a9728d83733a6f53

Restore FDW Build (#506)
  • Modify the top-level Makefile to support building the FDW extension and external table extension depending on the PG_CONFIG version
  • Add new fdw make target in the top-level Makefile
  • Add GP7 RHEL7 build job to the pxf-build pipeline
  • Add GP7 RHEL7 build job to the dev pipelines
  • Fix Java 11 compilation issues
  • Add GP7 RHEL7 build job to the PR pipeline
  • Fix FDW compilation issues against GP7
  • Fix FDW write: do not execute write for FDW when the exec_location is all segments and the role is master.
  • Add RPM spec file for GP7
  • Add support for `pxf [cluster] register` for GP7. `pxf [cluster] register` will now register the pxf_fdw extension for the GP7 RPM.
  • Update README to include Java 11 build support
  • Master build is now moved to the pxf-build pipeline


Francisco Guerrero

commit sha bfdadc72d0140e2bb2c29471f873a0530c4226f6

CI: Fix CI groups for pxf-build [ci skip]


Francisco Guerrero

commit sha f3ec794309ae951d528e16bf3a32d714dc8da3b5

Add PXF upgrade test

Test upgrading from the latest released version of PXF 5 to the PXF 6 version in the multinode environment.


push time in 6 days

PR opened greenplum-db/pxf

Docs: Add support for VARCHAR
+1 -1

0 comment

1 changed file

pr created time in 6 days

create branch greenplum-db/pxf

branch : docs-ppd-varchar

created branch time in 6 days

Pull request review comment greenplum-db/pxf

Add support for Aliyun Object Storage Service (OSS)

     public String getExternalTablePath(String basePath, String path) {
         return StringUtils.removeStart(path, basePath);
     }
 },
+    OSS("oss"),

Yeah, I think we need a good name for Aliyun's service. OSS is what their service is called ... but it's pretty confusing.

frankgh

comment created time in 7 days

Pull request review comment greenplum-db/pxf

Add support for Aliyun Object Storage Service (OSS)

 ifneq "$(PROTOCOL)" ""
 			sed $(SED_OPTS) "s|YOUR_AZURE_BLOB_STORAGE_ACCOUNT_NAME|$(WASB_ACCOUNT_NAME)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \
 			sed $(SED_OPTS) "s|YOUR_AZURE_BLOB_STORAGE_ACCOUNT_KEY|$(WASB_ACCOUNT_KEY)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \
 		fi; \
+		if [ $(PROTOCOL) = oss ]; then \
+			if [ -z "$(OSS_ACCESS_KEY_ID)" ] || [ -z "$(OSS_SECRET_ACCESS_KEY)" ] || [ -z "$(OSS_ENDPOINT)" ]; then \
+				echo "Aliyun Keys or Endpoint (OSS_ACCESS_KEY_ID, OSS_SECRET_ACCESS_KEY, OSS_ENDPOINT) not set"; \
+				rm -rf $(PROTOCOL_HOME); \
+				exit 1; \
+			fi; \
+			sed $(SED_OPTS) "s|YOUR_OSS_ACCESS_KEY_ID|$(OSS_ACCESS_KEY_ID)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \

yes

frankgh

comment created time in 7 days

push event greenplum-db/pxf

Francisco Guerrero

commit sha bfdadc72d0140e2bb2c29471f873a0530c4226f6

CI: Fix CI groups for pxf-build [ci skip]


push time in 7 days

push event greenplum-db/pxf

Francisco Guerrero

commit sha 7912ad4293d07f50864e82e1a9728d83733a6f53

Restore FDW Build (#506)
  • Modify the top-level Makefile to support building the FDW extension and external table extension depending on the PG_CONFIG version
  • Add new fdw make target in the top-level Makefile
  • Add GP7 RHEL7 build job to the pxf-build pipeline
  • Add GP7 RHEL7 build job to the dev pipelines
  • Fix Java 11 compilation issues
  • Add GP7 RHEL7 build job to the PR pipeline
  • Fix FDW compilation issues against GP7
  • Fix FDW write: do not execute write for FDW when the exec_location is all segments and the role is master.
  • Add RPM spec file for GP7
  • Add support for `pxf [cluster] register` for GP7. `pxf [cluster] register` will now register the pxf_fdw extension for the GP7 RPM.
  • Update README to include Java 11 build support
  • Master build is now moved to the pxf-build pipeline

view details

push time in 7 days

delete branch greenplum-db/pxf

delete branch : build-fdw

delete time in 7 days

PR merged greenplum-db/pxf

Reviewers
Restore FDW Build
  • Modify the top-level Makefile to support building the FDW extension and external table extension depending on the PG_CONFIG version
  • Add new fdw make target in the top-level Makefile
  • Add GP7 RHEL7 build job to the pxf-build pipeline
  • Add GP7 RHEL7 build job to the dev pipelines
  • Fix Java 11 compilation issues
  • Add GP7 RHEL7 build job to the PR pipeline
  • Fix FDW compilation issues against GP7
  • Fix FDW write, do not execute write for FDW when the exec_location is all segments and the role is master.
  • Add RPM spec file for GP7
  • Add support for pxf [cluster] register for GP7. pxf [cluster] register will now register the pxf_fdw extension for the GP7 RPM.

This PR does not address restoring the FDW tests, nor does it take care of building Ubuntu artifacts for GP7.

+401 -135

0 comment

29 changed files

frankgh

pr closed time in 7 days

Pull request review commentgreenplum-db/pxf

Add Support for reading ORC without Hive

+package org.greenplum.pxf.api.filter;
+
+import org.greenplum.pxf.api.io.DataType;
+
+import java.util.List;
+
+/**
+ * Transforms IN operator into a chain of OR operators. This transformer is
+ * useful for predicate builders that do not support the IN operator.
+ */
+public class InOperatorTransformer implements TreeVisitor {
+
+    /**
+     * {@inheritDoc}
+     */
+    @Override
+    public Node before(Node node, int level) {
+        return node;
+    }
+
+    /**
+     * {@inheritDoc}
+     */
+    @Override
+    public Node visit(Node node, int level) {
+
+        if (node instanceof OperatorNode) {
+
+            OperatorNode operatorNode = (OperatorNode) node;
+
+            if (operatorNode.getOperator() == Operator.IN
+                    && operatorNode.getLeft() instanceof ColumnIndexOperandNode
+                    && operatorNode.getRight() instanceof CollectionOperandNode) {
+
+                ColumnIndexOperandNode columnNode = (ColumnIndexOperandNode) operatorNode.getLeft();
+                CollectionOperandNode collectionOperandNode = (CollectionOperandNode) operatorNode.getRight();
+                List<String> data = collectionOperandNode.getData();
+                DataType type = collectionOperandNode.getDataType().getTypeElem() != null
+                        ? collectionOperandNode.getDataType().getTypeElem()
+                        : collectionOperandNode.getDataType();
+
+                // Transform the IN operator into a chain of ORs

The data size can be anywhere from 1 up to n. When the size is 1 we just transform the IN into an eq, otherwise we return a chain of ORs.

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Add Support for reading ORC without Hive

+package org.greenplum.pxf.api.filter;
+
+import org.greenplum.pxf.api.io.DataType;
+
+import java.util.List;
+
+/**
+ * Transforms IN operator into a chain of OR operators. This transformer is
+ * useful for predicate builders that do not support the IN operator.
+ */
+public class InOperatorTransformer implements TreeVisitor {
+
+    /**
+     * {@inheritDoc}
+     */
+    @Override
+    public Node before(Node node, int level) {
+        return node;
+    }
+
+    /**
+     * {@inheritDoc}
+     */
+    @Override
+    public Node visit(Node node, int level) {
+
+        if (node instanceof OperatorNode) {

I originally had the code like that, but the other implementations of TreeVisitor follow this pattern, so I preferred to keep it as-is for consistency across the codebase. That said, I also prefer early returns.

frankgh

comment created time in 7 days

push eventgreenplum-db/pxf

Francisco Guerrero

commit sha 0d1ae3f8ae7c66fa475d52204caec9f690379d33

Address PR feedback

view details

push time in 7 days

Pull request review commentgreenplum-db/pxf

Add Support for reading ORC without Hive

+package org.greenplum.pxf.plugins.hdfs.filter;
+
+import org.greenplum.pxf.api.filter.ColumnIndexOperandNode;
+import org.greenplum.pxf.api.filter.Node;
+import org.greenplum.pxf.api.filter.Operator;
+import org.greenplum.pxf.api.filter.OperatorNode;
+import org.greenplum.pxf.api.filter.ScalarOperandNode;
+import org.greenplum.pxf.api.filter.TreeVisitor;
+import org.greenplum.pxf.api.io.DataType;
+import org.greenplum.pxf.api.utilities.Utilities;
+
+/**
+ * Transforms non-logical operator nodes that have scalar operand nodes as its
+ * children of BPCHAR type and which values have whitespace at the end of the
+ * string.
+ */
+public class BPCharOperatorTransformer implements TreeVisitor {
+    @Override
+    public Node before(Node node, final int level) {
+        return node;
+    }
+
+    @Override
+    public Node visit(Node node, int level) {
+
+        if (node instanceof OperatorNode) {

Same comment as above about converting to guard clauses with early returns.

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Add Support for reading ORC without Hive

+package org.greenplum.pxf.api.filter;
+
+import org.greenplum.pxf.api.io.DataType;
+
+import java.util.List;
+
+/**
+ * Transforms IN operator into a chain of OR operators. This transformer is
+ * useful for predicate builders that do not support the IN operator.
+ */
+public class InOperatorTransformer implements TreeVisitor {
+
+    /**
+     * {@inheritDoc}
+     */
+    @Override
+    public Node before(Node node, int level) {
+        return node;
+    }
+
+    /**
+     * {@inheritDoc}
+     */
+    @Override
+    public Node visit(Node node, int level) {
+
+        if (node instanceof OperatorNode) {
+
+            OperatorNode operatorNode = (OperatorNode) node;
+
+            if (operatorNode.getOperator() == Operator.IN
+                    && operatorNode.getLeft() instanceof ColumnIndexOperandNode
+                    && operatorNode.getRight() instanceof CollectionOperandNode) {
+
+                ColumnIndexOperandNode columnNode = (ColumnIndexOperandNode) operatorNode.getLeft();
+                CollectionOperandNode collectionOperandNode = (CollectionOperandNode) operatorNode.getRight();
+                List<String> data = collectionOperandNode.getData();
+                DataType type = collectionOperandNode.getDataType().getTypeElem() != null
+                        ? collectionOperandNode.getDataType().getTypeElem()
+                        : collectionOperandNode.getDataType();
+
+                // Transform the IN operator into a chain of ORs

Is data.size() guaranteed to be 2? If not, doesn't the transformed tree look more like

      (or)
      /  \
   (or)  (eq)
   /  \
(eq)  (eq)
frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Add Support for reading ORC without Hive

+package org.greenplum.pxf.api.filter;++import org.greenplum.pxf.api.io.DataType;++import java.util.List;++/**+ * Transforms IN operator into a chain of OR operators. This transformer is+ * useful for predicate builders that do not support the IN operator.+ */+public class InOperatorTransformer implements TreeVisitor {++    /**+     * {@inheritDoc}+     */+    @Override+    public Node before(Node node, int level) {+        return node;+    }++    /**+     * {@inheritDoc}+     */+    @Override+    public Node visit(Node node, int level) {++        if (node instanceof OperatorNode) {

Could we convert these to guard clauses with early returns?

if (!(node instanceof OperatorNode)) {
    return node;
}

if (operatorNode.getOperator() != Operator.IN
        || !(operatorNode.getLeft() instanceof ColumnIndexOperandNode)
        || !(operatorNode.getRight() instanceof CollectionOperandNode)) {
    return node;
}

That way the bulk of the method isn't doubly indented.

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Restore FDW Build

 GPHOME=/usr/local/greenplum-db-${GPDB_VERSION}
 function install_gpdb() {
     local pkg_file
     if command -v rpm; then
-	    pkg_file=$(find "${GPDB_PKG_DIR}" -name "greenplum-db-${GPDB_VERSION}-rhel*-x86_64.rpm")
-	    echo "Installing RPM ${pkg_file}..."
-	    rpm --quiet -ivh "${pkg_file}" >/dev/null
+        pkg_file=$(find "${GPDB_PKG_DIR}" -name "greenplum-db-${GPDB_VERSION}-rhel*-x86_64.rpm")

I guess I just fixed indentation here, not sure if I want to mess with this code :) Maybe I'll just revert it

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Restore FDW Build

 fly -t ud set-pipeline \
     -v gpdb-branch=6X_STABLE -v pgport=6000 \
     -v pxf-tag=<YOUR-TAG> -p dev:longevity_<YOUR-TAG>_6X_STABLE
 ```
-
-# Deploy `pg_regress` pipeline
-
-This pipeline currently runs the smoke test group against the different clouds using `pg_regress` instead of automation.
-It uses both external and foreign tables.
-You can adjust the `folder-prefix`, `gpdb-git-branch`, `gpdb-git-remote`, `pxf-git-branch`, and `pxf-git-remote`.
-For example, you may want to work off of a development branch for PXF or Greenplum.
-
-```shell
-fly -t ud set-pipeline -p pg_regress \
-    -c ~/workspace/pxf/concourse/pipelines/pg_regress_pipeline.yml \

not yet, not all the tests have been migrated

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Add support for Aliyun Object Storage Service (OSS)

         public String getExternalTablePath(String basePath, String path) {
             return StringUtils.removeStart(path, basePath);
         }
     },
+    OSS("oss"),

In line with my comment above, is it possible to make this `aliyun-oss`?

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Add support for Aliyun Object Storage Service (OSS)

 ifneq "$(PROTOCOL)" ""
 			sed $(SED_OPTS) "s|YOUR_AZURE_BLOB_STORAGE_ACCOUNT_NAME|$(WASB_ACCOUNT_NAME)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \
 			sed $(SED_OPTS) "s|YOUR_AZURE_BLOB_STORAGE_ACCOUNT_KEY|$(WASB_ACCOUNT_KEY)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \
 		fi; \
+		if [ $(PROTOCOL) = oss ]; then \
+			if [ -z "$(OSS_ACCESS_KEY_ID)" ] || [ -z "$(OSS_SECRET_ACCESS_KEY)" ] || [ -z "$(OSS_ENDPOINT)" ]; then \
+				echo "Aliyun Keys or Endpoint (OSS_ACCESS_KEY_ID, OSS_SECRET_ACCESS_KEY, OSS_ENDPOINT) not set"; \
+				rm -rf $(PROTOCOL_HOME); \
+				exit 1; \
+			fi; \
+			sed $(SED_OPTS) "s|YOUR_OSS_ACCESS_KEY_ID|$(OSS_ACCESS_KEY_ID)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \

Are these calls to sed supposed to be editing in place? Are you missing -i?
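For reference, a quick sketch of the difference (the file path and key value here are made up; `SED_OPTS` presumably expands to `-i`, possibly with the extra `''` argument BSD/macOS sed requires):

```shell
# Hypothetical stand-in for the real site.xml template.
printf 'YOUR_OSS_ACCESS_KEY_ID\n' > /tmp/oss-site.xml

# Without -i, sed writes the substitution to stdout only; the file is unchanged.
sed "s|YOUR_OSS_ACCESS_KEY_ID|abc123|" /tmp/oss-site.xml > /dev/null
grep -c 'YOUR_OSS_ACCESS_KEY_ID' /tmp/oss-site.xml    # still 1: file untouched

# With -i (GNU sed form), the file is edited in place.
sed -i "s|YOUR_OSS_ACCESS_KEY_ID|abc123|" /tmp/oss-site.xml
grep -c 'abc123' /tmp/oss-site.xml                    # now 1: placeholder replaced
```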

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Add support for Aliyun Object Storage Service (OSS)

 ifneq "$(PROTOCOL)" ""
 			sed $(SED_OPTS) "s|YOUR_AZURE_BLOB_STORAGE_ACCOUNT_NAME|$(WASB_ACCOUNT_NAME)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \
 			sed $(SED_OPTS) "s|YOUR_AZURE_BLOB_STORAGE_ACCOUNT_KEY|$(WASB_ACCOUNT_KEY)|" $(PROTOCOL_HOME)/$(PROTOCOL)-site.xml; \
 		fi; \
+		if [ $(PROTOCOL) = oss ]; then \
+			if [ -z "$(OSS_ACCESS_KEY_ID)" ] || [ -z "$(OSS_SECRET_ACCESS_KEY)" ] || [ -z "$(OSS_ENDPOINT)" ]; then \

I don't know if these are following an existing convention or if they have to be named this way because third-party deps require them, but the OSS_* prefix seems very generic. Could we prepend ALIYUN_ to the environment variables?

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Add PXF upgrade test

 function run_pxf_installer_scripts() {
 			gpstop -u
 		fi &&
 		sed -i '/edw/d' hostfile_all &&
-		gpscp -f ~gpadmin/hostfile_all -v -u centos ~gpadmin/install_pxf_dependencies.sh centos@=: &&
+		gpscp -f ~gpadmin/hostfile_all -v -u centos -r ~/pxf_installer ~gpadmin/install_pxf_dependencies.sh centos@=: &&
 		gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e 'sudo ~centos/install_pxf_dependencies.sh' &&
-		if [[ ${PXF_COMPONENT} == true ]]; then
-			gpscp -f ~gpadmin/hostfile_all -v -u centos -r ~/pxf_tarball centos@=: &&
-			gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e 'tar -xzf ~centos/pxf_tarball/pxf-*.tar.gz -C /tmp'
-			gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e 'sudo GPHOME=${GPHOME} /tmp/pxf*/install_component'
-			gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e 'sudo chown -R gpadmin:gpadmin ${PXF_HOME}'
+		gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e 'sudo GPHOME=${GPHOME} ~centos/pxf_installer/install_component'
+		gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e 'sudo chown -R gpadmin:gpadmin ${PXF_HOME}'
+		if [[ ${PXF_VERSION} == 5 ]]; then
+			GPHOME=${GPHOME} PXF_CONF=${BASE_DIR} ${PXF_HOME}/bin/pxf cluster init
 		else
-			gpscp -f ~gpadmin/hostfile_all -v -u gpadmin -r ~/pxf_tarball gpadmin@=: &&
-			gpssh -f ~gpadmin/hostfile_all -v -u gpadmin -s -e 'tar -xzf ~/pxf_tarball/pxf.tar.gz -C ${GPHOME}'
+			${PXF_HOME}/bin/pxf cluster register
 		fi &&
-		${PXF_HOME}/bin/pxf cluster register &&
 		if [[ -d ~/dataproc_env_files ]]; then
 			gpscp -f ~gpadmin/hostfile_init -v -r -u gpadmin ~/dataproc_env_files =:
 		fi &&
 		~gpadmin/configure_pxf.sh &&
 		gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e \"sudo sed -i -e 's/edw0/edw0 hadoop/' /etc/hosts\" &&
-		PXF_BASE=${PXF_BASE_DIR} ${PXF_HOME}/bin/pxf cluster sync &&
-		PXF_BASE=${PXF_BASE_DIR} ${PXF_HOME}/bin/pxf cluster start &&
+		${PXF_HOME}/bin/pxf cluster sync &&
+		${PXF_HOME}/bin/pxf cluster start &&
 		if [[ $INSTALL_GPHDFS == true ]]; then
 			gpssh -f ~gpadmin/hostfile_all -v -u centos -s -e '
-				sudo cp ${PXF_BASE_DIR}/servers/default/{core,hdfs}-site.xml /etc/hadoop/conf
+				sudo cp ${BASE_DIR}/servers/default/{core,hdfs}-site.xml /etc/hadoop/conf
 			'
 		fi
 	"
 }
 
 function _main() {
-	local SCP_FILES=(pxf_tarball cluster_env_files/*)
+	mkdir -p /tmp/pxf_installer/
+	if [[ -d pxf_tarball ]]; then
+		if [[ ${PXF_COMPONENT} == true ]]; then
+			mkdir -p /tmp/pxf_inflate
+			tar -xzf pxf_tarball/pxf-*.tar.gz -C /tmp/pxf_inflate
+			cp /tmp/pxf_inflate/pxf*/* /tmp/pxf_installer/
+		else
+			cp pxf_tarball/pxf.tar.gz /tmp/pxf_installer
+			cat << EOF > /tmp/pxf_installer/install_component

If we change this to `cat <<-EOF > /tmp/pxf_installer/install_component`, then we can indent the here-document with tabs and they will be stripped. This would allow indenting in a natural fashion, although we would have a mix of tabs and spaces in the whole file.
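A tiny demonstration of the difference (scratch paths are made up; the tabs are written via printf so they survive copy/paste):

```shell
# <<-EOF strips leading tabs from the here-doc body; plain <<EOF keeps them.
printf 'cat <<-EOF\n\thello\nEOF\n' > /tmp/hd_dash.sh
printf 'cat <<EOF\n\thello\nEOF\n'  > /tmp/hd_plain.sh

sh /tmp/hd_dash.sh    # prints: hello         (leading tab stripped)
sh /tmp/hd_plain.sh   # prints: <TAB>hello    (leading tab preserved)
```

Note that `<<-` strips only tabs, not spaces, which is why the indentation has to be tab-based.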

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Restore FDW Build

 fly -t ud set-pipeline \
     -v gpdb-branch=6X_STABLE -v pgport=6000 \
     -v pxf-tag=<YOUR-TAG> -p dev:longevity_<YOUR-TAG>_6X_STABLE
 ```
-
-# Deploy `pg_regress` pipeline
-
-This pipeline currently runs the smoke test group against the different clouds using `pg_regress` instead of automation.
-It uses both external and foreign tables.
-You can adjust the `folder-prefix`, `gpdb-git-branch`, `gpdb-git-remote`, `pxf-git-branch`, and `pxf-git-remote`.
-For example, you may want to work off of a development branch for PXF or Greenplum.
-
-```shell
-fly -t ud set-pipeline -p pg_regress \
-    -c ~/workspace/pxf/concourse/pipelines/pg_regress_pipeline.yml \

Do we also need to delete this pipeline config?

frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Restore FDW Build

 GPHOME=/usr/local/greenplum-db-${GPDB_VERSION}
 function install_gpdb() {
     local pkg_file
     if command -v rpm; then
-	    pkg_file=$(find "${GPDB_PKG_DIR}" -name "greenplum-db-${GPDB_VERSION}-rhel*-x86_64.rpm")
-	    echo "Installing RPM ${pkg_file}..."
-	    rpm --quiet -ivh "${pkg_file}" >/dev/null
+        pkg_file=$(find "${GPDB_PKG_DIR}" -name "greenplum-db-${GPDB_VERSION}-rhel*-x86_64.rpm")

Super pedantic nit: It is possible to have rpm installed on Ubuntu. If we are trying to detect the platform, we could use /etc/os-release:

platform="$(. /etc/os-release; echo "${ID}")"
case $platform in
centos)
  ...
  ;;
ubuntu)
  ...
  ;;
*)
  echo "Unsupported operating system ${platform}. Exiting..."
  exit 1
  ;;
esac
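The detection above can be exercised against a stand-in os-release file (the path and contents below are made up for illustration; a real script would source `/etc/os-release` directly):

```shell
# Fake os-release file so the detection logic can run on any machine.
printf 'ID=centos\nVERSION_ID="7"\n' > /tmp/fake-os-release

# Source it in a subshell so ID doesn't leak into the current environment.
platform="$(. /tmp/fake-os-release; echo "${ID}")"
case $platform in
centos)
  echo "rpm-based install path"
  ;;
ubuntu)
  echo "deb-based install path"
  ;;
*)
  echo "Unsupported operating system ${platform}. Exiting..."
  exit 1
  ;;
esac
# prints: rpm-based install path
```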
frankgh

comment created time in 7 days

Pull request review commentgreenplum-db/pxf

Restore FDW Build

 include common.mk
 
+PG_CONFIG = pg_config
+
 PXF_VERSION ?= $(shell cat version)
 export PXF_VERSION
 
+SOURCE_EXTENSION_DIR = external-table
+TARGET_EXTENSION_DIR = gpextable
+ifneq ($(shell $(PG_CONFIG) --version | egrep "PostgreSQL 12"),)
+	SOURCE_EXTENSION_DIR = fdw
+	TARGET_EXTENSION_DIR = fdw
+endif
+
+FDW_SUPPORT = $(shell $(PG_CONFIG) --version | egrep "PostgreSQL 12")

Could you move `FDW_SUPPORT` to before line 10, and then use it in the if-statement?

frankgh

comment created time in 7 days

push eventgreenplum-db/pxf

Francisco Guerrero

commit sha c03cedb5706bc333f8590f8991178d7ce8adfc1c

Master build is now moved to the pxf-build pipeline

view details

push time in 7 days

push eventgreenplum-db/pxf

Francisco Guerrero

commit sha c07dbdf9062c12e78fc7038d2aef0c07192f12a6

Update README to include Java 11 build support

view details

push time in 8 days

push eventgreenplum-db/pxf

Francisco Guerrero

commit sha c7e64bbba7a697311258b342ae7a27517e90a20c

Restore FDW Build

- Modify the top-level Makefile to support building the FDW extension and external table extension depending on the PG_CONFIG version
- Add new fdw make target in the top-level Makefile
- Add GP7 RHEL7 build job to the pxf-build pipeline
- Add GP7 RHEL7 build job to the dev pipelines
- Fix Java 11 compilation issues
- Add GP7 RHEL7 build job to the PR pipeline
- Fix FDW compilation issues against GP7
- Fix FDW write, do not execute write for FDW when the exec_location is all segments and the role is master.
- Add RPM spec file for GP7
- Add support for `pxf [cluster] register` for GP7. `pxf [cluster] register` will now register the pxf_fdw extension for the GP7 RPM.

view details

push time in 8 days

PR opened greenplum-db/pxf

Restore FDW Build
  • Modify the top-level Makefile to support building the FDW extension and external table extension depending on the PG_CONFIG version
  • Add new fdw make target in the top-level Makefile
  • Add GP7 RHEL7 build job to the pxf-build pipeline
  • Add GP7 RHEL7 build job to the dev pipelines
  • Fix Java 11 compilation issues
  • Add GP7 RHEL7 build job to the PR pipeline
  • Fix FDW compilation issues against GP7
  • Fix FDW write, do not execute write for FDW when the exec_location is all segments and the role is master.
  • Add RPM spec file for GP7
  • Add support for pxf [cluster] register for GP7. pxf [cluster] register will now register the pxf_fdw extension for the GP7 RPM.

This PR does not address restoring the FDW tests, nor does it take care of building Ubuntu artifacts for GP7.

+408 -118

0 comment

28 changed files

pr created time in 8 days
