Daniel Fernández (danielfernandez), Spain
Software Engineer. Author of Open Source technologies @thymeleaf, @unbescape, @attoparser, jasypt, @javatuples, @op4j, @javaruntype and @javagalician.

danielfernandez/reactive-matchday 82

Spring 5 showcase application with a Thymeleaf HTML5 interface

danielfernandez/ncomplo 2

Pet project for keeping a league based on guessing match results in a sports championship.

danielfernandez/hikari-test-mysql 1

Test for Hikari's closing of MySQL unclosed connections at data source shutdown.

danielfernandez/geotools 0

Official GeoTools repository

danielfernandez/graphql-java-map-alias-test 0

Test GraphQL-java's issue with aliases in Map-based output

danielfernandez/guestbook 0

Guestbook sample Java web-application

danielfernandez/hadoop 0

Apache Hadoop

danielfernandez/migrate_from_bugzilla 0

bugzilla to redmine migration rake task

danielfernandez/olingo-odata4 0

Mirror of Apache Olingo

push event: danielfernandez/hadoop

Steve Loughran

commit sha f75070f548a71991a5b495832a02bc15c156728b

HADOOP-17511. Add audit/telemetry logging to S3A connector

Notion of AuditSpan, which is created for a given operation; the goal is to pass it along everywhere. It is thread-local per FS instance; store operations pick it up in their constructor from the StoreContext. The entryPoint() method in the S3A FS has been enhanced to initiate the spans. For this to work, internal code SHALL NOT call those entry points (done) and all public API points MUST be declared as entry points; this is done, with a marker attribute @AuditEntryPoint to indicate it.

The audit span create/deactivate sequence is roughly the same as duration tracking, so the operations are generally merged: most of the metrics S3AFS collects are now durations. Part of the isolation into spans means there are explicit operations for mkdirs() and getContentSummary().

The auditing is intended to be a plugin point. Currently there is the LoggingAuditor, which:
- logs at debug
- adds an HTTP "referer" header with audit tracing
- can be set to raise an exception if the SDK is handed an AWS request and there is no active span (skipped for the multipart upload part and complete calls, as TransferManager in the SDK does that out of span)

and the NoopAuditor, which:
- does nothing

Change-Id: If11a2c48b00db530fb6bc1ad363e24b202acb827

HADOOP-17511. Auditing: getContentSummary and dynamic evaluation (WIP)
* Added getContentSummary as a single span, with some minor speedups.
* Still trying to find the best design to wire up dynamically evaluated attributes.
Currently logs should include the current thread ID; we don't yet pick up and include the thread where the span was created, which is equally important.
Change-Id: Ieea88e4228da0ac4761d8c006051cd1095c5fce8

HADOOP-17511. Audit spans to have unique IDs
Plus an interface ActiveThreadSpanSource to give the current thread's span. This is to allow integrations with the AWS SDK etc. to query the active span for an FS and immediately be able to identify the span by ID, for logging etc. Adding a unique ID to all audit spans and supporting the active thread span (with activate/deactivate) causes major changes in the no-op code, as suddenly there is a lot more state there across manager, auditor and span. It will be testable, though.
Change-Id: Id4dddcab7b735bd01f3d3b8a8236ff6da8f97671

HADOOP-17511. Audit review
* Span IDs must now be unique over time (helps in log analysis).
* All AWS SDK events go to AuditSpan.
* The FS.access() check also goes to the Auditor. This is used by Hive.
Change-Id: Id1ffffd928f2e274f1bac73109d16e6624ba0e9d

HADOOP-17511. Audit: timestamp, headers and handlers
- Timestamp of span creation picked up (epoch millis in UTC) and passed in to the referrer.
- Docs on referrer fields; section on privacy implications.
- Referrer header can be disabled (privacy...).
- Custom request handlers will be created (TODO: tests).
Change-Id: I6e94b43a209eee53748ac14270f318352d512fb8

HADOOP-17511. Unit test work
There are lots of implicit tests in the functional suites, but this adds tests for:
* binding of both bundled auditors
* adding extra request handlers
* wiring up of context to span, and to TransferManager lifecycle events
* common context static and dynamic eval
* WIP: parsing back of the HTTP referrer header
This gives reasonable coverage of what's going on, though another day's work would round it out.
Change-Id: I6b2d0f1dff223875268c18ded481d9e9fea2f250

HADOOP-17511. Unit and integration tests
* Tests of audit execution and stats update.
* Integration test suite for S3A.access().
* Tuning of implementation classes for ease of testing.
* Exporting the auditor from S3AFS.
* More stats collected.
* Move AuditCheckingException under CredentialInitializationException so that S3A translateException doesn't need changing.
* Audit p1 & p2 paths moving to be key only.
* Auditing docs include real S3 logs and a breakdown of the referrer (TODO: update).
The main remaining bit of testing would be to take existing code and verify that the headers get through, especially some of the commit logic and job ID. Distcp might, if the job ID is in the config.
Change-Id: I5723db55ba189f6c400cf29a90aa5605b0d98ad0

HADOOP-17511. Improving auditor binding inside the AWS SDK
Audit opname in span callbacks; always set the active span from the request. This allows the active operation to always be determined, including from any other plugins (credential provider, signer...) used in the same execution. This is also why the Auditor is now in StoreContext. Tests for RequestFactory.
Change-Id: I9528253cf21253e14714b838d3a8ae85d52ba8b7

HADOOP-17511. Checkstyle and more testing
Change-Id: If12f8204237eb0d79f2edcff03fc45f31b7d196a

HADOOP-17511. Auditing: move AuditSpan and common context to hadoop-common
A small move of code; the changes in imports were more traumatic.
Change-Id: Ide158d884bd7a873e07f0ddaff8334882eb28595

HADOOP-17511. Auditing
* Avoiding accidentally deactivating spans.
* Caching of, and switching to, the active span in rename/delete callbacks.
* Consistent access/use of the auditor ID for the FS ID, which is printed in S3AFileSystem.toString().
* S3Guard tools doing more for audit; also printing IOStats on -verbose.
* StoreContext taking/returning an AuditSpanSource, not the full OperationAuditor interface.
Change-Id: Ifc5f807a2d3a329b8a1184dd1fcba63205c1f174

HADOOP-17511. Auditing: marker tool
The marker tool always uses a span; this is created in the FS, rather than in the marker tool.
Change-Id: I03e31dd58c76a41e8a1b73e958b130ed405a29fe

HADOOP-17511. Auditing: explicit integration tests
Tests to deliberately create problems and so verify that things fail as expected.
Change-Id: If2e863cee54aa303c24a3d02174e466c272f24b2

HADOOP-17511. Auditing code/docs cleanup
* Move the audit classes in hadoop-common into their own package.
* Move the ActiveThreadSpanSource interface there and implement it in S3AFS. That's not for public consumption, but it may be good to have there so that ABFS can implement the same API.
Change-Id: I1b7d924555a1294f7acb3f47dc613adc32ffb003

HADOOP-17511. Auditing: S3 log parser pattern with real tests
Show everything works with a full parse of the output captured from a log entry of a real test run. This is the most complicated regexp I have ever written.
Change-Id: I090b2dcefad9938bea3b95ef717a7cb2e9eea10c

HADOOP-17511. Add filtering of header fields, with docs
Change-Id: I0da9487a708b5a8fd700ffddd0290d6c0621f3e2

HADOOP-17511. Audit: add o.a.h.fs.audit package with the public classes
Fix up yetus complaints, where valid.
Change-Id: I98e4f7a9c277c993555db6d62a20f2a00515c5e8

HADOOP-17511. Review
* Moved the HttpReferrerAuditHeader class.
* Added a version string to the URL.
* Explained some design decisions in the architecture doc.
* Added Mukund's comments.
Change-Id: I356e5428c51f74b25584bfb1674296ac193c81d5
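The commit messages above describe a span that is created at each operation's entry point, held thread-locally so that SDK callbacks can find it, deactivated at the end of the operation, and given an ID that is unique over time. A minimal sketch of that lifecycle, with hypothetical, heavily simplified names (the real classes live in the Hadoop source under org.apache.hadoop.fs.audit and org.apache.hadoop.fs.s3a.audit and have a much richer API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical, simplified span. Real AuditSpans also carry the operation's
// path and timestamp, and the span ID survives into the HTTP "referer"
// header so S3 server logs can be traced back to the operation.
final class AuditSpan implements AutoCloseable {
    private static final AtomicLong COUNTER = new AtomicLong();
    // The active span is thread-local, as described ("thread local per
    // FS-instance"); here simplified to a single global thread-local.
    private static final ThreadLocal<AuditSpan> ACTIVE = new ThreadLocal<>();

    private final long id = COUNTER.incrementAndGet(); // unique over time
    private final String operation;

    private AuditSpan(String operation) { this.operation = operation; }

    /** Public API entry points create and activate a span. */
    static AuditSpan entryPoint(String operation) {
        AuditSpan span = new AuditSpan(operation);
        ACTIVE.set(span);
        return span;
    }

    /** SDK callbacks and plugins query the current thread's span. */
    static AuditSpan currentSpan() { return ACTIVE.get(); }

    long id() { return id; }
    String operation() { return operation; }

    /** Deactivation on close mirrors the duration-tracking lifecycle. */
    @Override public void close() { ACTIVE.remove(); }
}

public class AuditSpanDemo {
    public static void main(String[] args) {
        try (AuditSpan span = AuditSpan.entryPoint("mkdirs")) {
            // Inside the operation, any code on this thread can find the span.
            System.out.println(AuditSpan.currentSpan().operation());
        }
        System.out.println(AuditSpan.currentSpan()); // null: span deactivated
    }
}
```

This is also why internal code must not call the entry points: re-entering an entry point from inside an operation would replace the active span and mis-attribute the nested work.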

Steve Loughran

commit sha 6c3f8bfad173d0e3aff6a94e9c0650dd3ee97d62

HADOOP-17511. Regression: Test failure in TestHttpReferrerAuditHeader Change-Id: Ic6ce36df6a3f6e12ed4ee8b6829460e8e875121b

Steve Loughran

commit sha 0a495b58820bfba6a9e782b501015689dd45a116

HADOOP-17511. How to set up logging in a bucket. Change-Id: I10c39db39f62af5606974ff38d497b36dbfb6823

Daniel Fernández

commit sha 2ec9ff04e9e7afc1a509b60451ab8bdd554a4c09

Set version to 3.3.1_HADOOP-17705-17771

push time: 12 hours ago

push event: danielfernandez/hadoop

Daniel Fernández

commit sha 318b7cd84b2e191a7398b0544e8c470854550922

Set version to 3.3.1_HADOOP-17705-17771

push time: 12 hours ago

create branch: danielfernandez/hadoop

branch: release-3.3.1_HADOOP-17705-17771

created branch time: 13 hours ago

push event: danielfernandez/hadoop

Mehakmeet Singh

commit sha a8848a20983d2f4e5ed4b8d872709f141bf26315

HADOOP-17705. S3A to add Config to set AWS region
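The HADOOP-17705 commit adds a configuration option so the S3A connector's AWS region can be set explicitly instead of being derived from the endpoint. In released Hadoop versions this surfaced as the fs.s3a.endpoint.region property; that property name is an assumption from the public JIRA, and the exact key in this branch may differ. A minimal core-site.xml fragment under that assumption:

```xml
<!-- Illustrative only: pin the S3A client to a fixed AWS region.
     Property name assumed from HADOOP-17705; verify against your release. -->
<property>
  <name>fs.s3a.endpoint.region</name>
  <value>eu-west-1</value>
</property>
```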

Mehakmeet Singh

commit sha 55a42a07af14217d9a601746d961fa90c48f4e0b

HADOOP-17705. yetus fix

Mehakmeet Singh

commit sha 53caaa61d3f8699abf81676c3c5067b4cceb5111

HADOOP-17705. checkstyle

Mehakmeet Singh

commit sha 929046eb3206fb7ecef0b39b49719f60596ffd64

HADOOP-17705. documentation

Mehakmeet Singh

commit sha b6896ae132da76df255c170000370d590d74c076

HADOOP-17705. white-space fix

push time: 3 days ago

create branch: danielfernandez/hadoop

branch: release-3.3.1_HADOOP-17705

created branch time: 3 days ago

issue comment: thymeleaf/thymeleaf

Provide support for Servlet 5.0

Understood, thanks @wilkinsona

chkal

comment created time: 7 days ago