User Details
- User Since
- Oct 7 2014, 4:49 PM (510 w, 5 d)
- Availability
- Available
- LDAP User
- EBernhardson
- MediaWiki User
- EBernhardson (WMF)
Fri, Jul 19
The code has been deployed (but not loaded) to the beta cluster since Idea47a43d9fb.
Wed, Jul 17
I've been looking over the related code and pondering what could go wrong with the practical short-term solution.
Fri, Jul 12
AFAICT this has been deployed. The currently deployed version contains both patches above, and our helmfile configuration sets the new http-rate-limit-per-second to 600.
If I understand this correctly, it needs to be delayed by a few weeks. Metrics started getting recorded into prometheus with this week's train deploy, and we likely want to wait a couple (2?) weeks before transitioning the graphs so that they still contain the information they contain today. The metrics from StatsdDataFactory shouldn't be removed until after the transition.
Mon, Jul 8
At a technical level, what's happening is:
A few options we might consider:
Jun 14 2024
For the alerts, my best guess would be that we are not setting the contactgroups hieradata variable.
elastic2099 is still down, refusing to be started from the DRAC. From the logs on elastic2088:
Jun 13 2024
We now have a flexible way to define whether throttling should be enabled based on the presence or absence of various HTTP headers, and we need to decide which headers will be used. One concern with an X-Disable-Throttling header is that we need some way to ensure the header cannot be provided by arbitrary users, or we need to get more complicated and pass secrets around so that only requests carrying the secret token can disable throttling.
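A minimal sketch of what that check might look like (the header name comes from the discussion above; the token scheme and everything else here is an assumption, not a settled design):

```
import hashlib
import hmac

# Hypothetical shared secret, distributed only to trusted internal clients.
SHARED_SECRET = b"replace-me"

def expected_token() -> str:
    return hmac.new(SHARED_SECRET, b"disable-throttling", hashlib.sha256).hexdigest()

def should_throttle(headers: dict) -> bool:
    # Honoring a bare X-Disable-Throttling header would let any caller opt
    # out; requiring a token derived from a shared secret limits the bypass
    # to clients that were actually handed the secret.
    token = headers.get("X-Disable-Throttling", "")
    return not (token and hmac.compare_digest(expected_token(), token))
```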
Jun 11 2024
The second option, making throttling conditional on X-BIGDATA-READ-ONLY, makes sense to me. It's perhaps a little awkward to make generic and document, but shouldn't be too bad.
The derived dataset the fulltext abandonment comes from is discovery.search_satisfaction_metrics. This should be readable by anyone in the privatedata group. The related code definition for fulltext abandonment is found in search_satisfaction_metrics.py. Essentially the recorded events indicate, on a per-session basis, how many result pages the user saw and how many pages they visited. If a session saw a search result page and visited no pages, we consider that abandonment.
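As a rough illustration of that definition (column names here are assumed, not the actual schema of the table):

```
import pandas as pd

# Per-session rollup: result pages seen vs. result pages clicked through.
sessions = pd.DataFrame({
    "session_id": ["a", "b", "c", "d"],
    "serp_views": [3, 1, 1, 0],
    "page_visits": [1, 0, 2, 0],
})

# Abandonment: sessions that saw at least one result page but visited none.
searched = sessions[sessions["serp_views"] > 0]
abandonment_rate = (searched["page_visits"] == 0).mean()
print(f"fulltext abandonment: {abandonment_rate:.1%}")  # 33.3%
```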
Verified on the Wikikube Apache 2 accesslog dashboard that our requests have transitioned from the apache user agent to an application-specific user agent. For the moment the user agents are as follows. These may change in the near future, as the user agent will also be used for rate limiting and we have to decide on the appropriate granularity.
Jun 5 2024
One awkward point: I kinda abused SUP not failing on unknown config properties to put the list of indexes being backfilled into the chart. It's used so that if the orchestration crashes it can fetch the configmap on startup and reconstruct the backfill from only the deployed charts.
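Roughly, the recovery path looks like this (the configmap name, namespace, and key are made up for illustration):

```
from kubernetes import client, config

def recover_backfill_indexes() -> list[str]:
    # On startup, re-read the deployed chart's configmap; the in-flight
    # backfill index list survives an orchestrator crash without needing
    # any external state store.
    config.load_incluster_config()
    cm = client.CoreV1Api().read_namespaced_config_map(
        name="sup-backfill", namespace="cirrus-streaming-updater"
    )
    raw = (cm.data or {}).get("backfill-indexes", "")
    return [idx for idx in raw.split(",") if idx]
```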
Jun 3 2024
Process has completed for eqiad and codfw. Total runtime was 3.5 days in codfw, 5.5 days in eqiad. The difference in time is mostly accounted for by commonswiki failing after more than a day and retrying. Based on review of this run I'm going to update the repository to work on a per-index basis instead of a per-wiki basis. This should reduce the effect of retries on large wikis, and also avoid a problem we see in the current process where commonswiki_content finishes reindexing, but then doesn't get backfilled for a day waiting for commonswiki_file to reindex.
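The gist of the planned change, with hypothetical names: generate one unit of work per index rather than per wiki, so a retry or a slow sibling index doesn't hold the others back.

```
# Hypothetical sketch: per-wiki scheduling couples commonswiki_content to
# commonswiki_file; per-index scheduling lets each reindex+backfill
# proceed (and retry) independently. Not every wiki has all index types.
INDEX_TYPES = ("content", "general", "file")

def work_units(wikis: list[str]):
    for wiki in wikis:
        for index_type in INDEX_TYPES:
            yield f"{wiki}_{index_type}"

for unit in work_units(["commonswiki"]):
    print(unit)  # commonswiki_content, commonswiki_general, commonswiki_file
```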
May 28 2024
I've been working out a new reindexing orchestration for this, found in https://gitlab.wikimedia.org/repos/search-platform/cirrus-reindex-orchestrator/. It has now run to completion on cloudelastic, finishing in under a week (compared to ~3 weeks last time). A review of the logs and the set of live indices in cloudelastic suggests this has been successful. I'm making a few more cleanups to the codebase, and then will start reindexing eqiad and codfw.
May 16 2024
Went through and made some test charts in superset against the test tables I generated from the live data. It looks like we have everything we need, but I'm going to make one change to the collection scripts to simplify things.
May 13 2024
Security would be interested in us investigating the access control mechanisms in opensearch, so that access is more limited than "anyone with a network connection to the cluster".
May 6 2024
The four sub-tickets were combined into a single gitlab MR with two calculations, found in: https://gitlab.wikimedia.org/repos/search-platform/notebooks/-/merge_requests/3. These currently populate two daily-partitioned Hive tables, and I've filled them with data for the months of March and April. Expecting that going forward we will want to move the metrics calculation to Airflow, and decide which metrics are worth dashboarding.
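For reference, writing one day's output into a daily-partitioned table looks roughly like this (table and column names are placeholders, not the actual schema from the MR):

```
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

# One day's worth of computed metrics, standing in for the notebook output.
rows = [Row(wiki="enwiki", metric="fulltext_abandonment", value=0.42, day="2024-04-01")]
spark.createDataFrame(rows).createOrReplaceTempView("metrics_day")

# Dynamic partitioning writes each distinct `day` into its own partition.
spark.sql("""
    INSERT OVERWRITE TABLE ebernhardson.search_metrics PARTITION (day)
    SELECT wiki, metric, value, day FROM metrics_day
""")
```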
Four tickets were combined into a single ticket, two calculations, and found in the patch above:
May 3 2024
I've worked through most of this and have it calculating the last two months of metrics now. They can be found, for now, in ebernhardson.T358350 in Hive.
May 1 2024
The general issue here, missing search suggestions, is resolved and the temporary mitigations put in place have been rolled back. I'm calling this issue done. One of the root causes, network connectivity, has been resolved. The other root cause, promoting a bad index, is tracked in T363521. Some changes have already been put in place to make this code more resilient to network failures, but more might still be done.
Apr 26 2024
The consumer seems generally stable. Getting here involved changes to the application for better error handling, along with the taskmanager memory increase above. The pods had been running for a week uninterrupted until we brought them down yesterday to verify some new alerting.
Poked at the data-engineering-alerts archive; it looks like these were firing daily and then stopped on Apr 10. I think we can optimistically call this fixed?
Per the data-engineering-alerts list archive, these were triggering daily alerts for the two weeks prior to 2024-04-10 and haven't been emitted since. That is two days after the fix was applied, which is slightly curious, but I remember something about event refinement operating over a window of hours, so maybe it took some time for the last errors to pass through. I'm willing to call this complete now that the errors have stopped.
Root cause of the network issue has been tracked down in T363516#9748908: a layer-2 issue with LVS and new racks. With that fixed this error should trigger less frequently, but we should still apply some resiliency updates to the related code.
Decided to delay bringing traffic back to eqiad until Monday. To be confident in the daily indices we would probably want to rebuild them all, but that takes many hours and would finish only a few hours before I head out for the weekend, which didn't seem like a great time to bring traffic back. The daily rebuilds will run; we can look at them on Monday and bring traffic back if everything is back to normal.
I poked around a little, but I'm not sure how to check whether that fix solved the issue. I submitted a join request to the data-engineering-alerts mailing list and can check the archives for current frequency once accepted. I assume these alerts are also recorded by whatever sends them, but I wasn't sure where that is.
These look to have subsided, now 12 in the last 4 days.
Apr 25 2024
One thing we do have in logstash, although not specifically from the script running in eqiad, is a surprising (to me) number of general network errors talking to the elasticsearch cluster. Looking at the Host overview dashboard for mwmaint1002 for today, we can see intermittent network errors from 03:00 until 06:50; our completion indices build ran from 02:30 to 06:45. Over the last 7 days there are consistently network errors during this time period. I'm assuming we are causing those, but we could try running the build at a different time of day.
Started looking over this the other day. Some data we have available:
Wrote a terrible bash script to compare titlesuggest doc counts between the two clusters. This suggests the problem isn't limited to enwiki.
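A Python equivalent of that comparison would be roughly the following (the cluster endpoints and wiki list here are assumptions):

```
import requests

CLUSTERS = {
    "eqiad": "https://search.svc.eqiad.wmnet:9243",
    "codfw": "https://search.svc.codfw.wmnet:9243",
}

def doc_count(base_url: str, index: str) -> int:
    # Standard Elasticsearch _count API, which returns {"count": N, ...}.
    return requests.get(f"{base_url}/{index}/_count").json()["count"]

for wiki in ("enwiki", "dewiki", "commonswiki"):
    index = f"{wiki}_titlesuggest"
    counts = {dc: doc_count(url, index) for dc, url in CLUSTERS.items()}
    print(index, counts)
```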
Decided against shuffling traffic; the rebuild is almost complete already for enwiki. I can see in the logs where the enwiki eqiad build jumped from 44% to complete, but no indication why; nothing in logstash for that period either. I've created T363521 to put something in place to prevent this in the future.
Hmm, I can confirm this is happening. The completion index is built fresh every day in each datacenter. Usually they are the same, but somehow the eqiad index is about half the size of the codfw index (6.7g vs 14.5g). Autocomplete is fairly high traffic; we should probably shift the autocomplete traffic to codfw until this can be fixed, which probably requires a rebuild and a couple of hours.
Apr 18 2024
This chart should (eventually) contain the same data as gehel posted above. As of this moment only 5 days are calculated, but the aggregate percentages have already settled in. I only spent a couple of minutes making the chart, so this probably isn't the best way to present the data, but as an example: https://superset.wikimedia.org/explore/?slice_id=3368
Apr 17 2024
One potential improvement we talked about: the initial method of configuring the saneitizer adds new pieces to the flink execution graph, which means you have to play around with some dangerous options to pause saneitization, losing the current saneitization state in the process. We should update the flag that toggles saneitization so that the operator still connects to the graph but never emits any events or state changes when disabled. The general idea is that the shape of the graph should not change due to configuration changes, as graph shape changes require careful deployments.
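In rough illustrative Python (all names invented, not the actual Flink operator), the idea is that the disabled operator stays wired into the graph and simply produces nothing:

```
import time

POLL_INTERVAL_SEC = 60  # hypothetical cadence

class SaneitizerSource:
    """Illustrative stand-in for the saneitizer source operator."""

    def __init__(self, enabled: bool):
        self.enabled = enabled
        self.running = True

    def run(self, emit):
        while self.running:
            if self.enabled:
                for event in self.next_batch():
                    emit(event)
            # Sleep either way; a disabled source just never emits, keeping
            # the operator in the graph (same shape) without doing any work.
            time.sleep(POLL_INTERVAL_SEC)

    def next_batch(self):
        return []  # placeholder for the real page-visiting logic
```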
Initial deployment has been a bit rocky; in particular the saneitizer is visiting pages with error states we haven't seen in normal operation yet. Overall this is probably good: we would have run into pages with these error states eventually, and the saneitizer is simply speeding that process up. The pipeline has been running for a couple of hours now without issues. If it's still running without restarts by tomorrow we can probably consider the initial deployment complete.
Apr 16 2024
This looks to be all caught back up from our side.
Apr 15 2024
All indices on cloudelastic look to be recreated now as well. It hasn't been running this whole time; it just took me a while to get around to verifying the operation and finishing the couple of wikis that failed the first two times through.
It was backfilling over the weekend but got stuck around Feb 6th. It's back to processing hourlies; I expect they will keep decreasing for at least 12 more hours of processing at the current rates, as long as it doesn't get stuck again. Basically what happened: there is a daily cleanup of old data, and because this backfill operates on old data, the bits it had calculated were deleted in the middle of it working, and it stopped. I've paused the cleanup process until the backfill completes.
Apr 12 2024
They are stored and processing through now at a rate of something like one hour of data per minute. It should catch up soon enough.
Hmm, indeed it looks like hourly transfers have been stuck for quite some time. Somehow Airflow thinks there are two hours running and never failed them; it is still waiting for them to complete even though nothing is running. It looks like we never set an SLA value on this DAG, so its failures probably don't get properly recognized. I've reset the two tasks that were stuck and will see how I can get these all moving again, along with adding an SLA so it properly alerts.
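A minimal sketch of the SLA addition (the dag_id, schedule, and deadline are placeholders, not the real DAG definition):

```
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.empty import EmptyOperator

# With `sla` set, Airflow records an SLA miss and sends the configured
# alert when a task instance runs past the deadline, instead of a stuck
# "running" task going unnoticed.
with DAG(
    dag_id="hourly_transfers",
    schedule_interval="@hourly",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={"sla": timedelta(hours=6)},
) as dag:
    transfer = EmptyOperator(task_id="transfer_hour")
```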