Tomás Senart tsenart @sourcegraph http://tsenart.me Lucid ramblings in time and space.

tsenart/audiojedit 61

May there be crop - Berlin MHD 2011

tsenart/deadcode 38

Standalone repo of deadcode package from http://github.com/remyoudompheng/go-misc

tsenart/awesome-go 4

A curated list of awesome Go frameworks, libraries and software

tsenart/bitbasket 2

Virtual P2P Space

tsenart/2015-talks 1

Slides from 2015 Talks

tsenart/advent 1

Solutions to https://adventofcode.com/

tsenart/btcchain 1

Package btcchain implements bitcoin block handling and chain selection rules - a package from btcd

tsenart/btcd 1

An alternative full node bitcoin implementation written in Go (golang)

tsenart/btcwire 1

Implements the bitcoin wire protocol - core wire protocol package from btcd

pull request comment sourcegraph/about

Exchange feedback with candidates

@tsenart how were these messages received?

Most of the time there's no significant response from the candidate to this feedback.

nicksnyder

comment created time in 9 hours

issue comment sourcegraph/sourcegraph

redis-cache segfault in single server dogfood instance

@uwedeportivo: Do you plan to submit a bug report to Redis with all of the above?

uwedeportivo

comment created time in 9 hours

Pull request review comment sourcegraph/sourcegraph

external_services: use SQL to get distinct kinds

 func (e *ExternalServicesStore) List(ctx context.Context, opt ExternalServicesLi
 	return e.list(ctx, opt.sqlConditions(), opt.LimitOffset)
 }
 
+// DistinctKinds returns the distinct list of external services kinds that are stored in the database.
+func (e *ExternalServicesStore) DistinctKinds(ctx context.Context) ([]string, error) {
+	q := sqlf.Sprintf(`
+SELECT ARRAY(
+	SELECT DISTINCT(kind)::TEXT FROM external_services
+)`)

Here's my .psqlrc file. Maybe we can add it to the handbook / on-boarding :D

-- Official docs: http://www.postgresql.org/docs/9.3/static/app-psql.html
-- Unofficial docs: http://robots.thoughtbot.com/improving-the-command-line-postgres-experience

-- Don't display the "helpful" message on startup.
\set QUIET 1
\pset null 'NULL'

-- http://www.postgresql.org/docs/9.3/static/app-psql.html#APP-PSQL-PROMPTING
\set PROMPT1 '%[%033[1m%]%M %n@%/%R%[%033[0m%]%# '
-- PROMPT2 is printed when the prompt expects more input, like when you type
-- SELECT * FROM<enter>. %R shows what type of input it expects.
\set PROMPT2 '[more] %R > '

-- Show how long each query takes to execute
\timing

-- Use best available output format
\x auto
\set VERBOSITY verbose
\set HISTFILE ~/.psql_history- :DBNAME
\set HISTCONTROL ignoredups
\set COMP_KEYWORD_CASE upper
\unset QUIET

\setenv PAGER 'pspg -bX'
\pset border 2
\pset linestyle unicode
unknwon

comment created time in 9 hours

pull request comment sourcegraph/about

Update customer issues process

The downside of using GitHub is that there is no out-of-the-box tool to analyze the data for MTTR or other types of reports.

Hold our collective beers 🍻

pecigonzalo

comment created time in a day

issue comment sourcegraph/sourcegraph

redis-cache segfault in single server dogfood instance

Getting other keys doesn't really crash the process, so far only this one has that effect. Maybe a bug in Redis?

uwedeportivo

comment created time in a day

issue comment sourcegraph/sourcegraph

redis-cache segfault in single server dogfood instance

@uwedeportivo: I couldn't let this go, and tracked down a way to reproduce the segfault via a specific redis command that I traced (with tcpdump) being sent to redis. If you issue a GET "v2:http:http://127.0.0.1:3180/orgs/sourcegraph-testing/repos?sort=created&page=1&per_page=100" command, it makes the redis-server crash (tested locally with the copied redis-cache data directory). I have no idea why. Maybe that blob is too big and redis crashes?

uwedeportivo

comment created time in a day

issue comment sourcegraph/sourcegraph

redis-cache segfault in single server dogfood instance

But that wouldn't explain why redis-cache is the first process to terminate in the logs. I'm stuck and don't have more time to put into this right now.

uwedeportivo

comment created time in a day

issue comment sourcegraph/sourcegraph

redis-cache segfault in single server dogfood instance

You know what, this segfault may be a red herring. I managed to reproduce it consistently by modifying the server binary to not start redis-cache, so that I could start it manually alongside everything else that server starts. Nothing segfaults that way, but it does when I press CTRL+C. So another process may actually be failing and making everything else terminate; it's just that redis-cache isn't terminating gracefully for some reason.

uwedeportivo

comment created time in a day

issue comment sourcegraph/sourcegraph

redis-cache segfault in single server dogfood instance

I've been debugging this locally after copying the redis-cache data directory to my local computer (with kubectl cp, after creating an archive of it in the container's data directory) and mounting it in the right place (~/.sourcegraph). I can reproduce the segfault locally with:

docker run --publish 7080:7080 --publish 127.0.0.1:3370:3370 --rm --volume ~/.sourcegraph/config:/etc/sourcegraph --volume ~/.sourcegraph/data:/var/opt/sourcegraph sourcegraph/server:insiders

However, when I change the entrypoint to a bash shell and run redis-server /etc/sourcegraph/redis-cache.conf by hand, it doesn't segfault, so there's something else going on. Also, when I issue a manual BGREWRITEAOF command with redis-cli it doesn't segfault.

uwedeportivo

comment created time in a day

Pull request review comment sourcegraph/sourcegraph

monitoring(frontend): use ratios instead of hard thresholds

 To learn more about Sourcegraph's alerting, see [our alerting documentation](htt
 
 **Descriptions:**
 
-- _frontend: 5+ hard error search API responses every 5m_
+- _frontend: 5%+ hard error search API responses every 5m_
 
-- _frontend: 20+ hard error search API responses every 5m_
+- _frontend: 20%+ hard error search API responses every 5m_

I'd say 2% warning and 5% critical, but that's a very uninformed guess. What alerts would that result in today?

bobheadxi

comment created time in a day

issue comment sourcegraph/sourcegraph

redis-cache segfault in single server dogfood instance

I checked where moduleCreateArgvFromUserFormat is called in the Redis source code, and out of the three places, I think this is the code path that is triggering the segfault: https://sourcegraph.com/github.com/redis/redis@24c539251f368fb597b660de179a10b2d1370ecb/-/blob/src/module.c?subtree=true#L4108

Why? Because of changes introduced in https://github.com/sourcegraph/sourcegraph/pull/8114 , we started issuing BGREWRITEAOF commands on Sourcegraph frontend startup so that the AOF file wouldn't grow out of bounds due to frequent container restarts, and thus take a long time to be read on startup.

So I'm pretty sure it's related to that.

uwedeportivo

comment created time in a day

delete branch sourcegraph/about

delete branch : cloud-growth-plan

delete time in a day

push event sourcegraph/about

Tomás Senart

commit sha 703503af7f1b676e0e4a05bbeaa14fb20b20f48a

cloud: Add Growth plan section (#1357) * cloud: Add Growth plan section This PR adds a Growth plan section proposal to the Cloud team. * Update handbook/engineering/cloud/index.md Co-authored-by: Nick Snyder <nick@sourcegraph.com> Co-authored-by: Nick Snyder <nick@sourcegraph.com>

view details

push time in a day

PR merged sourcegraph/about

Reviewers
cloud: Add Growth plan section

This PR adds a Growth plan section proposal to the Cloud team page.

+10 -4

0 comment

1 changed file

tsenart

pr closed time in a day

Pull request review comment sourcegraph/about

cloud: Add Growth plan section

 Other:
 
 - [Keegan Carruthers-Smith](../../../company/team/index.md#keegan-carruthers-smith) will not be isolating his work to a single team. Instead, he will serially choose tasks that he thinks are important to work on and he will post updates to the most relevant tracking issue on GitHub. This is an experiment for the next month and we will evaluate the outcome on 2020-08-17. Tomás will continue to be his manager during this experiment.
 
-## Hiring status
+## Growth plan
 
-_Updated 2020-07-03_
+We've validated that while having a team focus on Cloud is very important, there continues to be a need for work to happen on other areas of our backend infrastructure. To that end, we'll split this team into two, Cloud and [Backend Infrastructure](../backend-infrastructure/index.md), as soon as we hired enough that each of those teams could have a minimum of 3 engineers, but for the time being, the Cloud team will have to handle any emerging high priority backend infrastructure goals that aren't directly related to the ultimate goal of the team.

It doesn't yet exist as its own team. The Cloud team fills that role in the meantime, as documented here.

tsenart

comment created time in a day

Pull request review comment sourcegraph/about

cloud: Add Growth plan section

 Other:
 
 - [Keegan Carruthers-Smith](../../../company/team/index.md#keegan-carruthers-smith) will not be isolating his work to a single team. Instead, he will serially choose tasks that he thinks are important to work on and he will post updates to the most relevant tracking issue on GitHub. This is an experiment for the next month and we will evaluate the outcome on 2020-08-17. Tomás will continue to be his manager during this experiment.
 
-## Hiring status
+## Growth plan
 
-_Updated 2020-07-03_
+We've validated that while having a team focus on Cloud is very important, there continues to be a need for work to happen on other areas of our backend infrastructure. To that end, we'll split this team into two, Cloud and [Backend Infrastructure](../backend-infrastructure/index.md), as soon as we hired enough that each of those teams could have a minimum of 3 engineers, but for the time being, the Cloud team will have to handle any emerging high priority backend infrastructure goals that aren't directly related to the ultimate goal of the team.
 
-We are hiring for these roles:
+Each of these teams should not grow larger than 8 engineers. In the mean-time, Tomás will be the manager of both teams, but we're looking to hire Engineering Managers for both down the line.

We're talking about engineers here because there's a limit to how many direct reports one engineering manager can have before getting spread too thin. 8 is an upper limit.

tsenart

comment created time in a day

push event sourcegraph/about

Tomás Senart

commit sha 12e81059266e36a063c18ccc91f1e763ed5196b2

Update handbook/engineering/cloud/index.md Co-authored-by: Nick Snyder <nick@sourcegraph.com>

view details

push time in a day

Pull request review comment sourcegraph/sourcegraph

monitoring(frontend): use ratios instead of hard thresholds

 To learn more about Sourcegraph's alerting, see [our alerting documentation](htt
 
 **Descriptions:**
 
-- _frontend: 5+ hard error search API responses every 5m_
+- _frontend: 5%+ hard error search API responses every 5m_
 
-- _frontend: 20+ hard error search API responses every 5m_
+- _frontend: 20%+ hard error search API responses every 5m_

Isn't this a bit too high? Shouldn't we be alerted before we reach 20% hard error rate? Similar question for all the other conversions to rates. Do they need a bit more thought?

bobheadxi

comment created time in a day

Pull request review comment sourcegraph/sourcegraph

external_services: add NoNamespace option to the List method

 type ExternalServiceKind struct {
 
 // ExternalServicesListOptions contains options for listing external services.
 type ExternalServicesListOptions struct {
+	// When true, only include external services not under any namespace (i.e. owned by all site admins),
+	// and value of NamespaceUserID is ignored.
+	NoNamespace bool

Ah, I see! Well, in that case this is fine! Sorry for misunderstanding. :)

unknwon

comment created time in a day

Pull request review comment sourcegraph/sourcegraph

external_services: use SQL to get distinct kinds

 func (e *ExternalServicesStore) List(ctx context.Context, opt ExternalServicesLi
 	return e.list(ctx, opt.sqlConditions(), opt.LimitOffset)
 }
 
+// DistinctKinds returns the distinct list of external services kinds that are stored in the database.
+func (e *ExternalServicesStore) DistinctKinds(ctx context.Context) ([]string, error) {
+	q := sqlf.Sprintf(`
+SELECT ARRAY(
+	SELECT DISTINCT(kind)::TEXT FROM external_services
+)`)

This can be simplified, but also, I think we need to filter out deleted rows. Check this out, there's a weird deleted "migration" kind in the production db.

localhost sg@sg=# select array_agg(distinct(kind)) from external_services;
┌──────────────────────────────────────────┐
│                array_agg                 │
├──────────────────────────────────────────┤
│ {GITHUB,GITLAB,GITOLITE,migration,OTHER} │
└──────────────────────────────────────────┘
(1 row)

Time: 140.307 ms
localhost sg@sg=# select array_agg(distinct(kind)) from external_services where deleted_at is null;
┌───────────────────────┐
│       array_agg       │
├───────────────────────┤
│ {GITHUB,GITLAB,OTHER} │
└───────────────────────┘
(1 row)

Time: 137.875 ms
localhost sg@sg=# 
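A minimal sketch of the simplified, filtered query suggested here. The function name and exact query shape are illustrative assumptions, not the PR's final code:

```go
package main

import "fmt"

// distinctKindsQuery builds the simplified SQL suggested above: aggregate the
// distinct kinds into one array and exclude soft-deleted rows, so stale kinds
// like the "migration" one in the production db don't show up.
func distinctKindsQuery() string {
	return "SELECT array_agg(DISTINCT kind) FROM external_services WHERE deleted_at IS NULL"
}

func main() {
	fmt.Println(distinctKindsQuery())
}
```

Against the production data shown in the psql session above, the deleted_at filter is what drops the stale "migration" kind from the result.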
unknwon

comment created time in a day

Pull request review comment sourcegraph/sourcegraph

external_services: add NoNamespace option to the List method

 type ExternalServiceKind struct {
 
 // ExternalServicesListOptions contains options for listing external services.
 type ExternalServicesListOptions struct {
+	// When true, only include external services not under any namespace (i.e. owned by all site admins),
+	// and value of NamespaceUserID is ignored.
+	NoNamespace bool

What made you consider adding an additional struct field to indicate this predicate? Would the default zero value of NamespaceUserID not suffice? Postgres "serial" types always start at 1, so we can treat 0 as NoNamespace.
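A sketch of the zero-value approach being asked about. The struct is trimmed to the one relevant field, and the condition-building helper is hypothetical:

```go
package main

import "fmt"

// ExternalServicesListOptions is trimmed down to the field relevant here.
type ExternalServicesListOptions struct {
	NamespaceUserID int32
}

// namespaceCondition shows how the zero value could stand in for a separate
// NoNamespace flag: Postgres serial columns start at 1, so 0 never matches a
// real user and could unambiguously mean "no namespace".
func namespaceCondition(opt ExternalServicesListOptions) string {
	if opt.NamespaceUserID == 0 {
		return "namespace_user_id IS NULL"
	}
	return fmt.Sprintf("namespace_user_id = %d", opt.NamespaceUserID)
}

func main() {
	fmt.Println(namespaceCondition(ExternalServicesListOptions{}))                    // zero value: no namespace
	fmt.Println(namespaceCondition(ExternalServicesListOptions{NamespaceUserID: 42})) // scoped to user 42
}
```

As the reviewer's follow-up reply ("Ah, I see! Well, in that case this is fine!") suggests, the PR had a reason for the separate field after all, so treat this purely as an illustration of the question.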

unknwon

comment created time in a day

pull request comment tsenart/vegeta

Implementing Prometheus Exporter and documentation

@flaviostutz: I've been swamped with work but saved some time to look at this on Saturday! Sorry.

flaviostutz

comment created time in a day

push event sourcegraph/about

Tomás Senart

commit sha 1ae530a247bfbf5413749cdacb67df12a49e48dc

cloud: Add Growth plan section This PR adds a Growth plan section proposal to the Cloud team.

view details

push time in 2 days

PR opened sourcegraph/about

Reviewers
cloud: Add Growth plan section

This PR adds a Growth plan section proposal to the Cloud team.

+6 -4

0 comment

1 changed file

pr created time in 2 days

create branch sourcegraph/about

branch : cloud-growth-plan

created branch time in 2 days

push event sourcegraph/about

Tomás Senart

commit sha 4e3f5d822006553136b720fe5ac2b80cedb68118

team: Add my name pronunciation

view details

push time in 2 days

pull request comment sourcegraph/about

cloud: document manual migrations we're performing

I would say this is provisioning and configurations which, in my opinion, is not in the scope of the service.

Fair enough, I agree with this after noodling on your answer. For the record, I was not suggesting the service itself would need to have these administrative privileges, but that instead of continuing to do migrations like we do today (done by the service itself), we could switch to another mechanism that serves all needs. Thanks for elaborating 👍 🙇

slimsag

comment created time in 2 days

Pull request review comment sourcegraph/careers

Initial draft of the full stack engineer job description

+![logo](https://sourcegraph.com/.assets/img/sourcegraph-light-head-logo.svg)
+
+# Full Stack Engineer
+Sourcegraph is seeking an engineer that thrives on owning big problems across domains and different levels of the stack — you grok how the ultimate measure of your work is how users experience the work you do, and that means you wear any hat necessary to that end, from product, to developer, and everything in between. You are a polyglot, quick learner and fearless into diving into unknown areas of the systems you work with. You believe that communicating clearly and empathetically and your relationships with others is critical to our success.
+## Qualifications

## Qualifications
aidaeology

comment created time in 3 days

Pull request review comment sourcegraph/careers

Initial draft of the full stack engineer job description

+![logo](https://sourcegraph.com/.assets/img/sourcegraph-light-head-logo.svg)
+
+# Full Stack Engineer
+Sourcegraph is seeking an engineer that thrives on owning big problems across domains and different levels of the stack — you grok how the ultimate measure of your work is how users experience the work you do, and that means you wear any hat necessary to that end, from product, to developer, and everything in between. You are a polyglot, quick learner and fearless into diving into unknown areas of the systems you work with. You believe that communicating clearly and empathetically and your relationships with others is critical to our success.
+## Qualifications
+
+We are looking for a full stack engineer who has strong fundamentals in good software development techniques, design patterns and best practices. In your career you have worked with and refactored existing code bases. You are both productive and pragmatic because you believe software is only useful if it is used. Collaborating with small high performing teams, you have built and deployed production-ready software that delivers value to customers.
+* Strong working knowledge with API design and architecture.

* Strong working knowledge with API design and architecture.
aidaeology

comment created time in 3 days

Pull request review comment sourcegraph/careers

Initial draft of the full stack engineer job description

+![logo](https://sourcegraph.com/.assets/img/sourcegraph-light-head-logo.svg)
+
+# Full Stack Engineer
# Full Stack Engineer

aidaeology

comment created time in 3 days

Pull request review comment sourcegraph/sourcegraph

Adding a readonly database user

+begin;
+  CREATE USER sgreader with PASSWORD 'sgreader';

Since this is public, I think we should not commit this to git, and should find a way to template this at migration time.

chayim

comment created time in 3 days

pull request comment sourcegraph/about

cloud: document manual migrations we're performing

Thank you all for clarifying.

Let's say we would like to require one or multiple read-only users; we should not impose how those are created, as different environments will deploy and configure their databases differently, and some might not even grant the Sourcegraph migration script the permissions to create users.

This depends on what requirements we define for supporting external databases. It would certainly simplify things for us if we didn't need a second out-of-band mechanism for administrative migrations. What would prevent us from requiring customers to connect Sourcegraph to an external Postgres database that is entirely owned by Sourcegraph, and thus would not encourage admins to fiddle with it manually? In that case, could we not use a single migration mechanism for all kinds of migrations?

slimsag

comment created time in 3 days

issue comment sourcegraph/sourcegraph

It is not called out clearly enough in docs that initial permission synching may take several hours

@slimsag: Please propose a change to those docs; it's not clear to me, at least, how you'd want to make point number 2 stand out more. Can you clarify?

slimsag

comment created time in 3 days

Pull request review comment sourcegraph/sourcegraph

Make BGREWRITEAOF actually best-effort

 func redisCheck(name, addr string, timeout time.Duration, pool *redis.Pool) sysr
 			c := pool.Get()
 			defer func() { _ = c.Close() }()
 
-			if err = c.Send("PING"); err != nil {
+			if err := c.Send("PING"); err != nil {
 				return err
 			}
 
-			if err = c.Send("BGREWRITEAOF"); err != nil {
+			if err := c.Send("BGREWRITEAOF"); err != nil {
 				return err
 			}
 
-			if err = c.Flush(); err != nil {
+			if err := c.Flush(); err != nil {
 				return err
 			}
 
-			// We ignore the response from BGREWRITEAOF, as this is best effort.
-			_, err = c.Receive()
+			_, err := c.Receive()
+			if err != nil && strings.HasPrefix(err.Error(), "MISCONF") {

Do we have docs that state a requirement for redis to persist to disk?

Only code docs :( https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/internal/redispool/redispool.go?subtree=true#L71

What if a private instance has their own redis cluster that has this option disabled?

Good point, we haven't run into this problem yet, but it could happen.

This is a quick workaround, but if you care about the data you are using it for, you should check to make sure why BGSAVE failed in first place.

From your link. Why did it fail?

efritz

comment created time in 4 days

Pull request review comment sourcegraph/sourcegraph

Make BGREWRITEAOF actually best-effort

 func redisCheck(name, addr string, timeout time.Duration, pool *redis.Pool) sysr
 			c := pool.Get()
 			defer func() { _ = c.Close() }()
 
-			if err = c.Send("PING"); err != nil {
+			if err := c.Send("PING"); err != nil {
 				return err
 			}
 
-			if err = c.Send("BGREWRITEAOF"); err != nil {
+			if err := c.Send("BGREWRITEAOF"); err != nil {
 				return err
 			}
 
-			if err = c.Flush(); err != nil {
+			if err := c.Flush(); err != nil {
 				return err
 			}
 
-			// We ignore the response from BGREWRITEAOF, as this is best effort.
-			_, err = c.Receive()
+			_, err := c.Receive()
+			if err != nil && strings.HasPrefix(err.Error(), "MISCONF") {

What made this start breaking now? What is this MISCONF substring referring to?

We have redis-store and redis-cache, and while the first is supposed to be persistent, we persist both because a cold cache can have a bad performance impact.

efritz

comment created time in 4 days

push event sourcegraph/sourcegraph

github-actions[bot]

commit sha b79f30bdc948af6dcb9865ae1438145496944edf

Update third-party licenses (#12634) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

view details

push time in 4 days

delete branch sourcegraph/sourcegraph

delete branch : chore/licenses-update

delete time in 4 days

PR merged sourcegraph/sourcegraph

chore: update third-party licenses

This is an automated pull request generated by this run.

+4 -14

0 comment

2 changed files

github-actions[bot]

pr closed time in 4 days

pull request comment sourcegraph/about

cloud: document manual migrations we're performing

There should be no cloud only migrations. Is there a good reason to not add these migrations as normal migrations?

slimsag

comment created time in 6 days

pull request comment sourcegraph/sourcegraph

db: Use mutex around cache

Is it really 3.75GB of memory?

Good call. I think this might be the data size for the full rows which Postgres needs to move around (since it's not a column store) before filtering down to just name and id which is what is returned to the frontend.

ryanslade

comment created time in 7 days

pull request comment sourcegraph/sourcegraph

db: Use mutex around cache

Related to the work you both did here: I played around with the query we are using to load this data. It's a pretty optimal plan. It takes 300ms because it's moving 3.75GB of memory around (all rows were already cached in memory, no disk I/O), so we're actually limited by memory bus bandwidth (12.5 GB / s, extrapolated).

Two remarks:

  1. This is another datapoint that we should be using IDs everywhere rather than names. Loading only the 100k IDs from the default_repos table takes only 15ms (~5MB of data).

  2. Have we considered the additional memory requirements for the frontend? Caching this in memory takes a significant chunk of the available memory.

ryanslade

comment created time in 7 days

push event sourcegraph/about

Tomás Senart

commit sha 67412168cf3e6ec430e742607f05bbe090326efd

cloud: Clarify team sync goal updates

view details

push time in 7 days

issue comment sourcegraph/sourcegraph

Sourcegraph not recognizing that GitHub Enterprise repos are archived

@dadlerj: If they have exclude: [{"archived": true}], then we should not sync any archived repos. It'd be good to see what the updated_at column for that repo shows — it might be that the syncer is taking a while and hasn't removed that repo yet.

dadlerj

comment created time in 8 days

delete branch sourcegraph/about

delete branch : cloud-updates

delete time in 8 days

push event sourcegraph/about

Tomás Senart

commit sha b029d40669125b1be188008f8c69c27b6881ce7c

cloud: Team page updates (#1295) * cloud: Team page updates This commit updates the Cloud team page with a number of changes: - Latest planning process - Daily updates - Changing weekly updates to be goal updates rather than personal ones - Retrospectives section - Team syncs - Full stack engineer in Hiring section (job req TBD and discussed with the team in the next sync) * Update handbook/engineering/cloud/index.md

view details

push time in 8 days

PR merged sourcegraph/about

cloud: Team page updates

This commit updates the Cloud team page with a number of changes:

  • Latest planning process
  • Daily updates
  • Changing weekly updates to be goal updates rather than personal ones
  • Retrospectives section
  • Team syncs
  • Full stack engineer in Hiring section (job req TBD and discussed with the team in the next sync)
+53 -1

0 comment

1 changed file

tsenart

pr closed time in 8 days

push event sourcegraph/about

Tomás Senart

commit sha 75bcdaf80020759443b881bc726a2a6f93cd81c0

Update handbook/engineering/cloud/index.md

view details

push time in 8 days

Pull request review comment sourcegraph/about

cloud: Team page updates

 _Updated 2020-07-03_
 
 We are hiring for these roles:
 
-- +1 [Software Engineer - Backend](https://github.com/sourcegraph/careers/blob/master/job-descriptions/software-engineer-backend.md)
+- +1 Software Engineer - Full Stack (Job req TBD)
- +1 [Software Engineer - Backend](https://github.com/sourcegraph/careers/blob/master/job-descriptions/software-engineer-backend.md)
tsenart

comment created time in 8 days

Pull request review comment sourcegraph/about

cloud: Team page updates

 _Updated 2020-07-03_
 
 We are hiring for these roles:
 
-- +1 [Software Engineer - Backend](https://github.com/sourcegraph/careers/blob/master/job-descriptions/software-engineer-backend.md)
+- +1 Software Engineer - Full Stack (Job req TBD)

Are we not hiring more backend engineers for this team? I think it might be valuable to change this to a more prose "Growth plan" like other team pages.

It's worth discussing with the team what growth needs we can identify currently beyond the full stack engineer role. I'll come back and convert this to a growth plan like other teams after that discussion (next cloud sync).

tsenart

comment created time in 8 days

Pull request review comment sourcegraph/about

cloud: Team page updates

 The cloud team owns all work that is necessary to build, secure, scale, and oper
 
 ## Processes
 
-We do [weekly check-ins](../tracking_issues.md#using-a-tracking-issue-for-progress-check-ins) and [monthly planning](../tracking_issues.md#planning-a-milestone-with-a-tracking-issue).
+### Planning
+
+We meet every month for a planning session as the current iteration approaches its ends. In this meeting we collaborate on what our few and focused goals for the iteration should be, what their scope is and which teammates work on what.
+These goals are then captured in an iteration [tracking issue](../tracking_issues.md).
+
+It's fine for an iteration to start with only clear goals and for the specific work to make progress on those to be discovered afterwards.
+
+### Updates
+
+We do regular updates to communicate our progress to members of the team, and to external stakeholders.
+
+#### Daily personal Slack updates
+
+Collaborating across timezones requires regular communication to keep each other updated on progress, and coordinate work handoff if needed. We use daily Slack updates to achieve this.
+
+Every day, Slackbot will post a reminder in the #cloud channel for you to write your daily update.
+
+**At the end of each working day**, you should post your update as a threaded response to the Slackbot message.
+
+You should include in your update:
+- What you worked on during your day.
+- Whether you're blocked on anything to make progress (a code review, input in an RFC or in a GitHub issue...).
+- What you plan on tackling next.
+
+**At the beginning of each working day**, you should read the updates thread for the previous working day, to learn what your teammates have been working on, and check if they need your help.
+
+#### Weekly goal updates in the tracking issue
+
+We use weekly [progress updates in the tracking issue](../tracking_issues.md#progress_updates) to inform external stakeholders of the progress of the team on the iteration goals.
+
+Every Friday, Slackbot will post a reminder in #cloud for us to write weekly progress updates for each goal in the iteration.
+
+The teammates working on a goal are responsible for this update which is essentially the written version of what should be communicated in the Cloud Sync meetings the next Monday.

This makes it sound like we will talk about status in the cloud sync, but I don't think that is true.

I think it makes sense to briefly give goal updates to everyone in the cloud sync, rather than personal updates.

tsenart

comment created time in 8 days

PR opened sourcegraph/about

cloud: Team page updates

This commit updates the Cloud team page with a number of changes:

  • Latest planning process
  • Daily updates
  • Changing weekly updates to be goal updates rather than personal ones
  • Retrospectives section
  • Team syncs
  • Full stack engineer in Hiring section (job req TBD and discussed with the team in the next sync)
+54 -2

0 comment

1 changed file

pr created time in 9 days

create branch sourcegraph/about

branch : cloud-updates

created branch time in 9 days

Pull request review comment sourcegraph/about

Generalize onboarding process

-# Engineering onboarding
+# Onboarding
 
-Welcome to Sourcegraph! This document will guide you through engineering specific onboarding.
+Welcome! We're excited to have you join the team. This document outlines the structure of your first few weeks at Sourcegraph.
 
-## Manager checklist
+## Guiding principles
 
+### There are no stupid questions
+
+Joining a new company can be overwhelming — there's a lot to learn! As you navigate your first few weeks at Sourcegraph, we want you to know that everyone on the team is here to help, and that there are **no stupid questions**.
+
+Every time you're curious or confused about something — just ask! When you do so, use [public discussion channels](../communication/team_chat.md#avoid_private_messages) as much as possible.
+
+### Think and act like an owner
+
+At Sourcegraph, we don't think of engineers as resources — we think of them as owners of their work, who constantly reevaluate how to use their talents to be as impactful as possible. We value your opinions and ideas. You should always feel empowered to identify potential improvements and act upon them, whether they be improvements to this onboarding process, our handbook and general documentation, our codebase and tooling, or our product.
At Sourcegraph, we don't think of engineers as resources — we think of them as owners of their work, who constantly reevaluate how to use their talents to be as impactful as possible. We value your opinions and ideas. You should always feel empowered to identify potential improvements and act upon them, whether they be improvements to processes (like onboarding), our handbook and general documentation, our codebase and tooling, or our product.
lguychard

comment created time in 9 days

Pull request review comment sourcegraph/about

Generalize onboarding process

 Welcome to Sourcegraph! This document will guide you through engineering specifi   - [Site24x7](https://www.site24x7.com) (optional; recommended for Distribution team members)   - [HoneyComb.io](https://www.honeycomb.io/)   - Ask Christina to send an invite to [Productboard](https://sourcegraph.productboard.com)-- Schedule a recurring [1-1](../leadership/1-1.md).-- Schedule time to discuss Sourcegraph's system architecture and tech stack.-- Schedule a time to discuss on-call rotation.-- Assign one or more starter tasks (e.g. small bugs or issues). -## Engineer checklist+## Weeks 1-3 -As you are completing these tasks, you may notice documentation, processes, or code that is broken or out of date. Your first priority is to fix these things. **Act like you own the onboarding experience of the next engineer that we hire.**+### Starter tasks -- [Complete general onboarding](../people-ops/onboarding.md#for-all-new-teammates)-- [Configure your GitHub notifications.](github-notifications/index.md)-- Message each person on your immediate team to setup a time to meet them. Some things you might want to learn:-  - What brought them to Sourcegraph?-  - What are they currently working on?-  - What do they do for fun?-- Join #dev-announce, #dev-chat, and your team's channel on Slack. [Team chat documentation](../communication/team_chat.md#engineering)+Your manager will assign to you three starter tasks that you should aim to complete in your first three weeks. 
These tasks are small, atomic, and intended to expose you to different parts of our codebase and product: it's important that you build the flexibility to fix any problem you'll be faced with at Sourcegraph, and don't narrow down your comfort zone to a certain part of our codebase or product.++As you're working on these tasks:+- Aim to be as incremental as possible:+    - Open a pull request as soon as you feel like you're ready for feedback or input on your code — you can make it a draft pull request if your code is still a work in progress.  +    - Favour splitting up your work in multiple pull requests every time it makes sense — shipping frequently is important.+    - Ask yourself what tests are appropriate for the change you're tackling, and add them!+- If you need help, remember that there are [no stupid questions](#there_are_no_stupid_questions) — ask for help in your team's channel (or any appropriate channel), and add the answer to our docs or the handbook if you feel like it can help future teammates.++As you complete these tasks, share your accomplishments in #progress 🙂++### Pairing sessions++Reach out to every member of your direct team, and set up a two-hour pairing session with them. These sessions will be an opportunity to get to know your teammates, understand what they're working on and why, and learn more about our codebase and development flows!++Take the first 20-30 minutes of the session to have an unstructured, introductory chat. Then, start hacking! Your teammate will set up a [live share](https://visualstudio.microsoft.com/services/live-share/) and walk you through what they're currently working on. Ask as many questions as possible, to try to understand:+- What problem is your teammate trying to solve?+- Why is that problem important? 
What team goal does it fit under?+- What are relevant [RFCs](https://about.sourcegraph.com/handbook/communication/rfcs), [PDs](https://about.sourcegraph.com/handbook/product/product_documents), GitHub issues or documentation pages?+- What parts of our codebase need to be changed to solve this problem?+- How will the code being changed be tested?+- Will there be future work needed after solving this problem? How could you contribute?++**Teammates** should prepare these pairing sessions so that they bring you the most value, and allow you to quickly ramp up on what the team is working on and why.++### Reading material++There will be plenty for you to read and learn about when you're not working on your starter tasks:+- Read through our [architecture overview](https://docs.sourcegraph.com/dev/architecture). - Read through the rest of the engineering handbook to learn more about how we operate.-- Read how we plan and keep each other up to date with [tracking issues](tracking_issues.md).-- Setup your [local development environment](https://github.com/sourcegraph/sourcegraph/blob/master/doc/dev/local_development.md#step-1-install-dependencies). If you encounter any issues, ask for help in Slack and then update the documentation to reflect the resolution (so the next engineer that we hire doesn't run into the same problem).-- Make yourself an admin on sourcegraph.com and sgdev.org by updating the database directly (this is not what a normal user would do, but doing it this way will expose you to useful knowledge). 
Relevant documentation:-  - Make sure you have an account on our deployments ([sourcegraph.com](https://sourcegraph.com), [sourcegraph.sgdev.org](https://sourcegraph.sgdev.org), [k8s.sgdev.org](https://k8s.sgdev.org)) before editing the database.-  - [Our deployments](deployments.md)-  - [How do I access the Sourcegraph database?](https://docs.sourcegraph.com/admin/faq#how-do-i-access-the-sourcegraph-database)-  - PostgreSQL Hint: Consider using a [transaction](https://www.postgresql.org/docs/current/tutorial-transactions.html) so that you can ROLLBACK your actions in case of a mistake-- Add yourself to the `sourcegraph` org on Sourcegraph.com.-- Start working on the starter tasks that your manager has assigned you.+- Read how we choose and continually update our [goals](../../company/goals/index.md).+- Read how we plan and keep each other up to date with [tracking issues](./tracking_issues.md).++## Weeks 4-6++### Start contributing to your team's goals++By now, you'll have shipped multiple improvements, paired with all members of your immediate team to understand what they were working on, and learned a lot about Sourcegraph. It'll be time for you to start contributing to your team's goals! It'll be up to you to define how you'll accomplish this:+- Pick a current iteration goal you'd like to start contributing to (or work with your team to define these iteration goals if the team is in planning phase).+- Chat with the teammates currently working on that goal, and relevant stakeholders (product, design, CE) to understand the problem being solved.+- Prepare a proposal for how you'll work with the team solving that problem in the following three weeks. 
Your work may include:+    - Working on previously identified GitHub issues+    - Doing spikes to solve unknowns and learn more about the problem+    - Writing RFCs to propose solutions, and planning resulting development work+- Discuss your plan with the team and your manager.+- Get hacking!++At Sourcegraph, you'll be expected to own the problems your team is solving, and work with the team to define solutions and plan your work — we think it's important that you start doing this early on!++### Give feedback on your onboarding++At the end of week 6 at Sourcegraph, your manager will send you a survey to learn more about what worked and what didn't during your onboarding. Take your time to answer this thoughtfully — your answers will be very important to make sure our onboarding process is even better for future hires!++TODO @lguychard: prepare this
lguychard

comment created time in 9 days

Pull request review comment sourcegraph/about

Generalize onboarding process

-# Engineering onboarding+# Onboarding -Welcome to Sourcegraph! This document will guide you through engineering specific onboarding.+Welcome! We're excited to have you join the team. This document outlines the structure of your first few weeks at Sourcegraph. -## Manager checklist+## Guiding principles +### There are no stupid questions++Joining a new company can be overwhelming — there's a lot to learn! As you navigate your first few weeks at Sourcegraph, we want you to know that everyone on the team is here to help, and that there are **no stupid questions**.++Every time you're curious or confused about something — just ask! When you do so, use [public discussion channels](../communication/team_chat.md#avoid_private_messages) as much as possible.++### Think and act like an owner++At Sourcegraph, we don't think of engineers as resources — we think of them as owners of their work, who constantly reevaluate how to use their talents to be as impactful as possible. We value your opinions and ideas. You should always feel empowered to identify potential improvements and act upon them, whether they be improvements to this onboarding process, our handbook and general documentation, our codebase and tooling, or our product.++Never assume that a problem is somebody else's to fix!++## Getting set up++You'll have to get some basics set up in your first few days:+- Complete [general onboarding](../people-ops/from-graphbook/onboarding.md#for-all-new-teammates)+- [Configure your GitHub notifications.](./github-notifications/index.md)+- Join #dev-announce, #dev-chat, and your team's channel on Slack. [Team chat documentation](../communication/team_chat.md#engineering)
- Join #dev-announce, #dev-chat, and your team's channel on Slack, as well as any other channels you find interesting. [Team chat documentation](../communication/team_chat.md#engineering)
lguychard

comment created time in 9 days

pull request comment sourcegraph/sourcegraph

repo-updater: add HTML version of repo-updater-state page

That'd work!

anukul

comment created time in 10 days

pull request comment sourcegraph/sourcegraph

repo-updater: add HTML version of repo-updater-state page

Changelog entry?

anukul

comment created time in 10 days

push event sourcegraph/sourcegraph

github-actions[bot]

commit sha c6d280cec8513a8c2f56ae39185cba5acb955a6e

Update third-party licenses (#12488) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

view details

push time in 11 days

delete branch sourcegraph/sourcegraph

delete branch : chore/licenses-update

delete time in 11 days

PR merged sourcegraph/sourcegraph

chore: update third-party licenses

This is an automated pull request generated by this run.

+3 -17

1 comment

3 changed files

github-actions[bot]

pr closed time in 11 days

pull request comment sourcegraph/about

Add new onboarding process for web teammates

I think much of this could be so useful to any other engineer on-boarding at Sourcegraph (regardless of team). Could we make this part of a more general on-boarding doc?

lguychard

comment created time in 14 days

Pull request review comment sourcegraph/sourcegraph

Encrypting tokens

+package secrets++import (+	"crypto/aes"+	"crypto/cipher"+	"crypto/rand"+	"encoding/base64"+	"io"+)++type EncryptionError struct {+	Message string+}++func (err *EncryptionError) Error() string {+	return err.Message+}++type EncryptionStore struct {

Please add a Go doc. Why is this called a "store"?
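For instance, a doc comment along these lines would help (a sketch only — the `Encryptor` name and wording are my suggestion, not something from the PR):

```go
package main

import "fmt"

// Encryptor encrypts and decrypts strings with a single symmetric
// AES key. A 32-byte key selects AES-256. (Sketch: the current name
// "EncryptionStore" suggests persistence, which this type doesn't do,
// hence the suggested rename.)
type Encryptor struct {
	// EncryptionKey is the raw AES key material.
	EncryptionKey []byte
}

func main() {
	e := Encryptor{EncryptionKey: make([]byte, 32)}
	fmt.Printf("key length: %d bytes\n", len(e.EncryptionKey)) // prints: key length: 32 bytes
}
```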

chayim

comment created time in 15 days

Pull request review comment sourcegraph/sourcegraph

Encrypting tokens

+package secrets++import (+	"reflect"+	"testing"++	"github.com/sourcegraph/sourcegraph/internal/randstring"+)++// Test that encrypting and decryption the message yields the same value+func TestDBEncryptingAndDecrypting(t *testing.T) {+	// 32 bytes means an AES-256 cipher+	key := []byte(randstring.NewLen(32))+	e := EncryptionStore{EncryptionKey: key}+	toEncrypt := "i am the super secret string, shhhhh"++	encrypted, err := e.Encrypt(toEncrypt)+	if err != nil {+		t.Errorf(err.Error())+	}++	// better way to compare byte arrays+	if reflect.DeepEqual(encrypted, []byte(toEncrypt)) {+		t.Fatal(err)+	}++	decrypted, err := e.Decrypt(encrypted)+	if err != nil {+		t.Fatal(err)+	}++	if decrypted != toEncrypt {+		t.Fatalf("failed to decrypt")+	}++}++// Test the negative result - we should fail to decrypt with bad keys+func TestBadKeysFailToDecrypt(t *testing.T) {+	key := []byte(randstring.NewLen(32))+	e := EncryptionStore{EncryptionKey: key}++	message := "The secret is to bang the rocks together guys."+	encrypted, _ := e.Encrypt(message)+	decrypted, _ := e.Decrypt(encrypted)++	notTheSameKey := []byte(randstring.NewLen(32))+	e.EncryptionKey = notTheSameKey+	decryptAgain, err := e.Decrypt(encrypted)+	if err != nil {+		t.Fatal(err)+	}++	if decrypted == decryptAgain {+		t.Fatal("Should not have been able to decrypt string with a second set of secrets.")+	}++}++// Test that different strings encrypt to different outputs+func TestDifferentOutputs(t *testing.T) {+	key := []byte(randstring.NewLen(32))+	e := EncryptionStore{EncryptionKey: key}+	messages := []string{+		"This may or may",+		"This is not the same as that",+		"The end of that",+		"Plants and animals",+		"Snow, igloos, sunshine, unicords",+	}++	var crypts []string+	for _, m := range messages {+		encrypted, _ := e.Encrypt(m)+		crypts = append(crypts, encrypted)+	}++	for _, c := range crypts {+		if isInSliceOnce(c, crypts) == false {+			t.Fatalf("Duplicate encryption string: %v.", c)+		}+	}+}++func isInSliceOnce(item string, 
slice []string) bool {+	found := 0+	for _, s := range slice {+		if item == s {+			found+++		}+	}++	return found == 1+}++func TestSampleNoRepeats(t *testing.T) {+	key := []byte(randstring.NewLen(32))+	toEncrypt := "All in, fall in, call in, wall in"+	e := EncryptionStore{EncryptionKey: key}++	var crypts []string+	for i := 0; i < 10000; i++ {+		encrypted, _ := e.Encrypt(toEncrypt)+		crypts = append(crypts, encrypted)+	}++	for _, item := range crypts {+		if isInSliceOnce(item, crypts) == false {+			t.Fatalf("Duplicate encrypted string found.")+		}+	}+}++// Test that rotating keys returns different encrypted strings+func TestDBKeyRotation(t *testing.T) {
func TestKeyRotation(t *testing.T) {
chayim

comment created time in 15 days

Pull request review comment sourcegraph/sourcegraph

Encrypting tokens

+package secrets++import (+	"reflect"+	"testing"++	"github.com/sourcegraph/sourcegraph/internal/randstring"+)++// Test that encrypting and decryption the message yields the same value+func TestDBEncryptingAndDecrypting(t *testing.T) {+	// 32 bytes means an AES-256 cipher+	key := []byte(randstring.NewLen(32))+	e := EncryptionStore{EncryptionKey: key}+	toEncrypt := "i am the super secret string, shhhhh"++	encrypted, err := e.Encrypt(toEncrypt)+	if err != nil {+		t.Errorf(err.Error())+	}++	// better way to compare byte arrays+	if reflect.DeepEqual(encrypted, []byte(toEncrypt)) {+		t.Fatal(err)+	}++	decrypted, err := e.Decrypt(encrypted)+	if err != nil {+		t.Fatal(err)+	}++	if decrypted != toEncrypt {+		t.Fatalf("failed to decrypt")+	}++}++// Test the negative result - we should fail to decrypt with bad keys+func TestBadKeysFailToDecrypt(t *testing.T) {+	key := []byte(randstring.NewLen(32))+	e := EncryptionStore{EncryptionKey: key}++	message := "The secret is to bang the rocks together guys."+	encrypted, _ := e.Encrypt(message)+	decrypted, _ := e.Decrypt(encrypted)++	notTheSameKey := []byte(randstring.NewLen(32))+	e.EncryptionKey = notTheSameKey+	decryptAgain, err := e.Decrypt(encrypted)+	if err != nil {+		t.Fatal(err)+	}++	if decrypted == decryptAgain {+		t.Fatal("Should not have been able to decrypt string with a second set of secrets.")+	}++}++// Test that different strings encrypt to different outputs+func TestDifferentOutputs(t *testing.T) {+	key := []byte(randstring.NewLen(32))+	e := EncryptionStore{EncryptionKey: key}+	messages := []string{+		"This may or may",+		"This is not the same as that",+		"The end of that",+		"Plants and animals",+		"Snow, igloos, sunshine, unicords",+	}++	var crypts []string+	for _, m := range messages {+		encrypted, _ := e.Encrypt(m)+		crypts = append(crypts, encrypted)+	}++	for _, c := range crypts {+		if isInSliceOnce(c, crypts) == false {
		if !isInSliceOnce(c, crypts) {
chayim

comment created time in 15 days

Pull request review comment sourcegraph/sourcegraph

Encrypting tokens

+package secrets++import (+	"reflect"+	"testing"++	"github.com/sourcegraph/sourcegraph/internal/randstring"+)++// Test that encrypting and decryption the message yields the same value+func TestDBEncryptingAndDecrypting(t *testing.T) {+	// 32 bytes means an AES-256 cipher+	key := []byte(randstring.NewLen(32))+	e := EncryptionStore{EncryptionKey: key}+	toEncrypt := "i am the super secret string, shhhhh"++	encrypted, err := e.Encrypt(toEncrypt)+	if err != nil {+		t.Errorf(err.Error())+	}++	// better way to compare byte arrays+	if reflect.DeepEqual(encrypted, []byte(toEncrypt)) {

You should use cmp.Diff, which is pretty nice for this use case and used in many places in our codebase. Alternatively, there's also bytes.Equal.

chayim

comment created time in 15 days

Pull request review comment sourcegraph/sourcegraph

Encrypting tokens

+package secrets++import (+	"reflect"+	"testing"++	"github.com/sourcegraph/sourcegraph/internal/randstring"+)++// Test that encrypting and decryption the message yields the same value+func TestDBEncryptingAndDecrypting(t *testing.T) {

Are we testing any DB things here?

chayim

comment created time in 15 days

pull request comment sourcegraph/sourcegraph

Encrypting tokens

@chayim: Can you please add some context to the PR description?

chayim

comment created time in 15 days

issue comment sourcegraph/sourcegraph

[WIP] Cloud: 3.19 Tracking issue

Ah well, I see we changed the tracking issue template to show unavailability, rather than availability. I think that makes sense.

tsenart

comment created time in 17 days

issue comment sourcegraph/sourcegraph

[WIP] Cloud: 3.19 Tracking issue

@sourcegraph/cloud: Can you please add your availability to the tracking issue in the "Availability" section?

tsenart

comment created time in 17 days

Pull request review comment sourcegraph/about

Add Rollout Process

+# Feature rollout process++Features come in many different sizes and shapes, and the process for introducing new functionality ranges with these differences. For large or significantly impactful changes or changes that simply need a bit more time to bake, it is encouraged that the following rollout process is followed.++## Sourcegraph Cloud++Sourcegraph Cloud is continuously deployed with all new updates to master. We maintain a [releasability contract](../engineering/continuous_releasability.md) and require all new features to be released behind a feature flag to ensure that functionality can be turned off if a problem arises.++### Before merge++- Run hallway tests with internal users+- Complete a final [design review](design/design_process.md#final-review)+- Review documentation+- Review analytics and ensure desired metrics have been added to the feature+- Confirm feature flag functionality

Seems like all of this can actually be after merge and deployment if things are feature flagged?

poojaj-tech

comment created time in 18 days

push event sourcegraph/about

Tomás Senart

commit sha dea43e6addbe3c95cbd4199bde99b92ac4369842

cloud: Add Goals section (#1226) As per discussion on https://threads.com/34382402992 and a live meeting to come up with the longer term goals for the team.

view details

push time in 18 days

delete branch sourcegraph/about

delete branch : cloud-goals

delete time in 18 days

PR merged sourcegraph/about

cloud: Add Goals section

As per discussion on https://threads.com/34382402992 and a live meeting to come up with the longer term goals for the team.

+5 -1

3 comments

1 changed file

tsenart

pr closed time in 18 days

pull request commentsourcegraph/about

cloud: Add Goals section

This goal is missing two things according to the documentation in #1177:

As that PR has since been merged, I'd like to discuss those two points in the product eng sync meeting tomorrow. Merging for now.

tsenart

comment created time in 18 days

issue comment sourcegraph/sourcegraph

[WIP] Cloud: 3.19 Tracking issue

In my sync with @christinaforney today, we had the idea of using the new Product Document template to capture the context around Goal 1. The exact format and prompts aren't set in stone (and you should change them as you see fit), but I think they provide very good structure, aligned with what we were thinking of writing. What do you think @unknwon, @ryanslade, @asdine? Check it out here: https://docs.google.com/document/d/1MBZxnRlDG69Fyvzpai5rBqxizvX5zVeZiUe6z7VZrjk/edit#heading=h.g0gjwch98szj

tsenart

comment created time in 18 days

Pull request review comment sourcegraph/sourcegraph

authz: simplify `PermsStore` logic with intarray

 func (s *PermsStore) GrantPendingPermissions(ctx context.Context, userID int32, 	} 	up.IDs = roaring.Or(oldIDs, p.IDs) -	up.UpdatedAt = txs.clock()-	if q, err = upsertUserPermissionsBatchQuery(up); err != nil {+	objectIDs := make([]int32, 0, up.IDs.GetCardinality())+	for _, id := range up.IDs.ToArray() {

I don't think you need to call ToArray to iterate over this.

unknwon

comment created time in 18 days

Pull request review comment sourcegraph/sourcegraph

authz: simplify `PermsStore` logic with intarray

 VALUES ON CONFLICT ON CONSTRAINT   user_pending_permissions_service_perm_object_unique DO UPDATE SET-  object_ids = excluded.object_ids,+  object_ids = UNIQ(SORT(user_pending_permissions.object_ids + excluded.object_ids)),

Ditto.

unknwon

comment created time in 18 days

Pull request review comment sourcegraph/sourcegraph

authz: simplify `PermsStore` logic with intarray

 func (s *PermsStore) SetRepoPendingPermissions(ctx context.Context, accounts *ex 	added := roaring.AndNot(p.UserIDs, oldIDs) 	removed := roaring.AndNot(oldIDs, p.UserIDs) -	// Load stored user IDs of both added and removed.-	changedIDs := roaring.Or(added, removed).ToArray()--	// In case there is nothing to add or remove.-	if len(changedIDs) == 0 {+	// In case nothing added or removed.+	if added.GetCardinality() == 0 && removed.GetCardinality() == 0 {

Is there no method on this type like IsEmpty() or something?

unknwon

comment created time in 18 days

Pull request review comment sourcegraph/sourcegraph

authz: simplify `PermsStore` logic with intarray

 VALUES ON CONFLICT ON CONSTRAINT   user_permissions_perm_object_unique DO UPDATE SET-  object_ids = excluded.object_ids,+  object_ids = UNIQ(SORT(user_permissions.object_ids + excluded.object_ids)),

Same as my last comment, I think you can use |.

unknwon

comment created time in 18 days

Pull request review comment sourcegraph/sourcegraph

authz: simplify `PermsStore` logic with intarray

 VALUES ON CONFLICT ON CONSTRAINT   repo_permissions_perm_unique DO UPDATE SET-  user_ids = excluded.user_ids,+  user_ids = UNIQ(SORT(repo_permissions.user_ids + excluded.user_ids)),

I think you can use | here and avoid the UNIQ(SORT(...)) wrapping. Please validate.

  user_ids = repo_permissions.user_ids | excluded.user_ids,
unknwon

comment created time in 18 days

issue closed sourcegraph/sourcegraph

Cloud: 3.18 Tracking issue

Status

This tracking issue is being worked on.

Context

This iteration is the first we're working officially as the cloud team on items from the RFC 151 roadmap, which we have no OKRs for yet.

However, there's still work outside of this roadmap that needs to be completed, and as such has been prioritised in this iteration, namely:

  • @unknwon is working on RFC 167: Enforce product tiers, starting with Spike: Enforcing product tiers #11577 and ensuing work is addressing the OKR Product: Deliver on projects to increase and support sales efforts. KR: License keys enable features for each license tier.
  • @unknwon is wrapping up work from RFC 137: Better automated testing. There's no specific OKR for this effort.
  • @keegancsmith is wrapping up work from RFC 136: Navigating Version Contexts with Index multiple (non-master) branches #6728 which directly addresses Product: Deliver on projects to increase and support sales efforts. KR: Users can search across contexts of their organization’s code as outlined in RFC 136.
  • @keegancsmith will be working streaming search support in the context of Sourcegraph Cloud roadmap afterwards — no OKR for this yet.
  • @ryanslade main focus will be on scaling repo-updater as part of the Sourcegraph Cloud roadmap.
  • @daxmc99 main focus will be on RFC 174: HA Postgres for Sourcegraph Cloud #11496
  • @asdine main focus will be on Store git commits in Postgres as part of the Sourcegraph Cloud roadmap.

Availability

Period is from 20th of June to 20th of July. Please write the days you won't be working and the number of working days for the period.

  • @keegancsmith: TBD
  • @tsenart: 21d
  • @unknwon: 19d (National holidays June 25-26)
  • @ryanslade: 21d
  • @daxmc99: 21d
  • @asdine: 20d
  • Chayim Kirshen (Security Engineer starting June 29th): TBD

Workload

<!-- BEGIN WORK --> <!-- BEGIN ASSIGNEE: asdine --> @asdine: 5.00d

  • [x] repository status page with "cloning" filter is extremely slow #11029 4d
    • [x] cloud: Repo-updater store repositories clone status in repo table #11867 :shipit:
  • [ ] mail package deprecated #10703 🧶
  • [ ] Ability to disable SMTP TLS verification #10702 1d 👩
  • [ ] Split resolveRepositories code #10489 :shipit:🧶 <!-- END ASSIGNEE -->

<!-- BEGIN ASSIGNEE: beyang --> @beyang

  • [ ] Sourcegraph.com behaves like customer instances for a subset of repositories #10821 <!-- END ASSIGNEE -->

<!-- BEGIN ASSIGNEE: chayim --> @chayim

  • [ ] Chayim Kirshen On-boarding #11785 <!-- END ASSIGNEE -->

<!-- BEGIN ASSIGNEE: daxmc99 --> @daxmc99: 14.00d

  • [ ] RFC 174: HA Postgres for Sourcegraph Cloud #11496 10d
  • [ ] Add operational playbook for updating external database versions #1099
  • [x] add terraform configuration for sourcegraph.com's infrastructure #10455 2d
  • [x] Terraform migration: Migrate all non-postgres stateful data #11759 1d
  • [x] Terraform migration: Update docs #1115
  • [ ] RFC 174: Terraform Cloud SQL #11794 1d
  • [ ] Sourcegraph.com - add redis-store & precise-code-intel-bundle-manager snapshotting #10450 <!-- END ASSIGNEE -->

<!-- BEGIN ASSIGNEE: keegancsmith --> @keegancsmith: 10.00d

  • [x] Index multiple (non-master) branches #6728 10d 👩 <!-- END ASSIGNEE -->

<!-- BEGIN ASSIGNEE: ryanslade --> @ryanslade: 4.00d

  • [ ] Scale repo-updater WIP #11493 🛠️
  • [x] RFC 132: Use rate limit data when scheduling changeset syncs #9105 2d
  • [x] src actions exec can cause frontend to be OOMKilled #10540 2d 🐛
  • [x] Diff query for unknown revision is slow #11654 <!-- END ASSIGNEE -->

<!-- BEGIN ASSIGNEE: tsenart --> @tsenart

  • [ ] ~Add 'activity histogram' to admin usage stats + rearrange~ #11063 2d
  • [x] engineering: Welcome to the Cloud team, @asdine! #1106 :shipit:
  • [x] team: Welcome to the Cloud, Dax #1057 :shipit: <!-- END ASSIGNEE -->

<!-- BEGIN ASSIGNEE: unknwon --> @unknwon: 13.50d

  • [ ] Simplify PermsStore logic with intarray extension #11768 2d
  • [ ] Use Postgres intarray to store permissions object_ids #11400 3d 🧶
  • [x] gqltest: add dev docs #11578 0.5d
  • [x] RFC 142: Migrate backend e2e test with integration tests #10068 5d 🛠️
  • [x] Dotcom site-admin overview page breaks with anonymous user ID #11801 0.5d 🐛
  • [x] Codecov: use go-acc for better code coverage on master #11781 1d
  • [x] authz: ReposIDsWithOldestPerms breaks when repos are hard-deleted from repo table #11769 0.5d 🐛
  • [x] Add integration tests that saves rows to user_external_accounts then grant permissions #11614 1d 🧶
  • [ ] ~Enforcing product tiers~ #11577 5d 🕵️
  • [ ] ~e2e: remove tests covered by dev/gqltest~ #11762 1d
  • [ ] ~gqltest: re-enable in CI on every branch~ #11579 2d
  • [ ] ~Spike: Pure SQL authz flow~ #11767 3d 🕵️ <!-- END ASSIGNEE --> <!-- END WORK -->

Legend

  • 👩 Customer issue
  • 🐛 Bug
  • 🧶 Technical debt
  • 🛠️ Roadmap
  • 🕵️ Spike
  • 🔒 Security issue
  • :shipit: Pull Request

closed time in 18 days

tsenart

issue comment sourcegraph/sourcegraph

Cloud: 3.18 Tracking issue

Created a first draft of the 3.19 tracking issue: https://github.com/sourcegraph/sourcegraph/issues/12327

Please move over any work you plan to do this iteration and let's have a conversation if that would prevent you from making progress on the core goals for 3.19.

tsenart

comment created time in 18 days

issue opened sourcegraph/sourcegraph

Cloud: 3.19 Tracking issue

Plan

Goal 1: Allow users and organizations to create external services.

@unknwon, @ryanslade and @asdine are working on this together. Figuring out multi-tenancy is going to unlock a lot of the capabilities we want from the Cloud project. External services are at the center of this because they are our current mechanism for specifying which repos to sync. We want to think through the product changes (and consequences) necessary for users and organizations to create external services on any Sourcegraph instance, and at least prototype what it would look like. We recognize that we're starting work on this goal in an uphill (i.e. discovery) phase, so the amount of planned work will grow over time. We'll gather feedback on the end-user product experience we're thinking of from a larger group including @christinaforney, @sqs and @rrhyne as part of this discovery process.

Goal 2: Store and handle external service and external account secrets securely.

@chayim, and @daxmc99 are working on this together. As we already know, there's a lot of security ground-work to do before we can allow any user or organization to use sourcegraph.com with private code. This is one of the key changes that need to happen and aligns well with the first goal for this iteration.

Leftover

There's some work individual members of the team are doing this iteration that was left over from the previous, namely @unknwon is working on Enforcing Product Tiers #11577.

Availability

If you have planned unavailability this iteration (e.g., vacation), you can note that here.

Tracked issues

<!-- BEGIN WORK --> <!-- END WORK -->

Legend

  • 👩 Customer issue
  • 🐛 Bug
  • 🧶 Technical debt
  • 🛠️ Roadmap
  • 🕵️ Spike
  • 🔒 Security issue
  • :shipit: Pull Request

created time in 18 days

push event sourcegraph/sourcegraph

github-actions[bot]

commit sha 7fec1bcc70080af496621516526cd67851f6606e

Update third-party licenses (#12116) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

view details

push time in 18 days

delete branch sourcegraph/sourcegraph

delete branch : chore/licenses-update

delete time in 18 days

PR merged sourcegraph/sourcegraph

chore: update third-party licenses

This is an automated pull request generated by this run.

+106 -30

1 comment

1 changed file

github-actions[bot]

pr closed time in 18 days

started wk8/go-ordered-map

started time in 18 days

issue comment sourcegraph/sourcegraph

Stefan Hengl onboarding plan

Noticed we were using initials and this was still assigned to @rvantonder so made the change :)

tsenart

comment created time in 18 days

pull request comment sourcegraph/sourcegraph

leader: Add a leader election package

I thought we wanted to use Postgres for distributed locks?

ryanslade

comment created time in 21 days

create branch sourcegraph/about

branch : cloud-goals

created branch time in 21 days

PR opened sourcegraph/about

Reviewers
cloud: Add Goals section

As per discussion on https://threads.com/34382402992 and a live meeting to come up with the longer term goals for the team.

+5 -1

0 comment

1 changed file

pr created time in 21 days

push event tsenart/vegeta

Yonatan Graber

commit sha 83a3ce91dbff282c3e2dec57a69434563a101a13

Added default first hist bucket if needed (#532)

view details

push time in 23 days

PR merged tsenart/vegeta

Reviewers
Added default first hist bucket if needed

Background

Fixes #522

When reporting hist with buckets that don't start with 0, any request that doesn't fit the first bucket will fall into the last bucket, skewing the results. Please see the discussion in issue #522.

This PR will check if the first bucket starts with 0. If not, it will create a default bucket for such cases. For example, this input -hist[5ms,10ms,1s] will generate the following buckets: [0,5ms], [5ms,10ms], [10ms,1s], [1s,+Inf] (and not [5ms,10ms], [10ms,1s], [1s,+Inf])


Checklist

  • [X] Git commit messages conform to community standards.
  • [X] Each Git commit represents meaningful milestones or atomic units of work.
  • [X] Changed or added code is covered by appropriate tests.
+6 -1

0 comments

2 changed files

yonatang

pr closed time in 23 days

issue closed tsenart/vegeta

Wrong data in report -type hist

Version and Runtime

Version: 12.8.3
Commit: 
Runtime: go1.14.1 linux/amd64
Date: '2020-03-27T03:13:26Z+0000'

Steps to Reproduce

Just take whatever HTTP server you find handy, make it handle one URL. I took mkdir /tmp/ttt; cd /tmp/ttt; touch foo; python3 -m http.server.

Next, launch a load test against that. Test parameters don't matter at all. E.g. this oneliner:

$ echo 'GET http://localhost:8000/foo' \
    | vegeta attack -duration 5s -rate 100 \
    | tee /tmp/reqlog \
    | vegeta report -type hist -buckets [3.3ms,10ms,33ms,100ms,333ms,1s,3s,10s,33s] \
    ; vegeta report -type text < /tmp/reqlog

Actual outcome

Bucket           #    %       Histogram
[3.3ms,  10ms]   86   17.20%  ############
[10ms,   33ms]   2    0.40%   
[33ms,   100ms]  0    0.00%   
[100ms,  333ms]  0    0.00%   
[333ms,  1s]     0    0.00%   
[1s,     3s]     0    0.00%   
[3s,     10s]    0    0.00%   
[10s,    33s]    0    0.00%   
[33s,    +Inf]   412  82.40%  #############################################################

Requests      [total, rate, throughput]         500, 100.20, 100.16
Duration      [total, attack, wait]             4.992s, 4.99s, 2.043ms
Latencies     [min, mean, 50, 90, 95, 99, max]  1.142ms, 2.774ms, 2.674ms, 3.62ms, 3.957ms, 6.433ms, 10.164ms
Bytes In      [total, mean]                     0, 0.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:500  
Error Set:

With max latency reported as 10.164ms — how is it possible that the histogram shows 412 requests in the [33s, +Inf] bucket?

Expected outcome

Latency buckets reported correctly.

closed time in 23 days

ulidtko

Pull request review comment sourcegraph/sourcegraph

Fix cloned repositories listing and counting

 func (s DBStore) SetClonedRepos(ctx context.Context, repoNames ...string) error
 const setClonedReposQueryFmtstr = `
 -- source: cmd/repo-updater/repos/store.go:DBStore.SetClonedRepos
-/*
-This query generates a diff by selecting only
-the repos that need to be updated.
-Selected repos will have their cloned column reversed if
-their cloned column is true but they are not in cloned_repos
-or they are in cloned_repos but their cloned column is false
-*/
+--
+-- This query generates a diff by selecting only
+-- the repos that need to be updated.
+-- Selected repos will have their cloned column reversed if
+-- their cloned column is true but they are not in cloned_repos
+-- or they are in cloned_repos but their cloned column is false.
+--
 WITH cloned_repos AS (
-  SELECT jsonb_array_elements_text(%s)
+  SELECT jsonb_array_elements_text(%s) AS name
 ), diff AS (
   SELECT id,
     cloned
   FROM repo
   WHERE
     NOT cloned
-      AND name IN (SELECT * FROM cloned_repos)
+      AND lower(name) IN (SELECT lower(name) FROM cloned_repos)

Do you think we should change the query?

Yeah. We should avoid the sequential scan; it's going to be super slow on millions of rows.

asdine

comment created time in 23 days

Pull request review comment sourcegraph/sourcegraph

Fix cloned repositories listing and counting

 func (s DBStore) SetClonedRepos(ctx context.Context, repoNames ...string) error
 const setClonedReposQueryFmtstr = `
 -- source: cmd/repo-updater/repos/store.go:DBStore.SetClonedRepos
-/*
-This query generates a diff by selecting only
-the repos that need to be updated.
-Selected repos will have their cloned column reversed if
-their cloned column is true but they are not in cloned_repos
-or they are in cloned_repos but their cloned column is false
-*/
+--
+-- This query generates a diff by selecting only
+-- the repos that need to be updated.
+-- Selected repos will have their cloned column reversed if
+-- their cloned column is true but they are not in cloned_repos
+-- or they are in cloned_repos but their cloned column is false.
+--
 WITH cloned_repos AS (
-  SELECT jsonb_array_elements_text(%s)
+  SELECT jsonb_array_elements_text(%s) AS name
 ), diff AS (
   SELECT id,
     cloned
   FROM repo
   WHERE
     NOT cloned
-      AND name IN (SELECT * FROM cloned_repos)
+      AND lower(name) IN (SELECT lower(name) FROM cloned_repos)

The name column type is citext, which stands for case-insensitive text. Do we have to lower(name)? Check the documentation: https://www.postgresql.org/docs/9.6/citext.html

If we do have to do some lowering, can we do so that we use indexes appropriately?

asdine

comment created time in 23 days

issue comment sourcegraph/sourcegraph

RFC 196 Tracking Issue

@chayim: Here's what another RFC tracking issue looks like: https://github.com/sourcegraph/sourcegraph/issues/11526. You can still rely on the tracking-issue tool to keep this updated. You just have to create a label, say RFC-196, and then add that label to this tracking issue and to the issues it tracks. To read more about this: https://about.sourcegraph.com/handbook/engineering/tracking_issues#populating-and-maintaining-a-tracking-issue

chayim

comment created time in 23 days

pull request comment sourcegraph/sourcegraph

pings: RFC 197 (user accounting metrics)

Anything that would go beyond our timeout would break pings. I think that's 60s, but I'm not sure.

ebrodymoore

comment created time in 24 days

pull request comment sourcegraph/sourcegraph

pings: RFC 197 (user accounting metrics)

@tsenart before we merge new event_logs analysis for pings, how should we validate/test performance characteristics? Is this something you can peek at and get a sense for risk? Just run this with some fake data on the order of sg.com's count of rows in event_logs to see how it looks?

I'd just run the query against the sourcegraph.com database via psql or another client and look at its performance first (i.e. total run-time). If it's slow then we could EXPLAIN ANALYZE it with https://flame-explain.com/
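The EXPLAIN ANALYZE step mentioned above could be wrapped in a tiny helper like this (hypothetical, not part of the codebase); FORMAT JSON produces the plan format that visualizers such as flame-explain.com consume:

```go
package main

import "fmt"

// explainAnalyze wraps a query so Postgres actually executes it and reports
// run-time statistics; BUFFERS adds I/O counts and FORMAT JSON emits a plan
// that tools like flame-explain.com can visualize.
func explainAnalyze(query string) string {
	return "EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) " + query
}

func main() {
	fmt.Println(explainAnalyze("SELECT count(*) FROM event_logs"))
}
```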

ebrodymoore

comment created time in 24 days
