Jonas Hörsch (coroa), Berlin, Germany

chaoflow/metachao 3

metachao strives to become the sacred chao of python metaprogramming

chaoflow/nixpkgs 1

Nix Packages collection

chaoflow/tpv.nix 1

Connect the vortex to nix' facilities (hydra/store/...)

coroa/emacs-jabber 1

fork of git://emacs-jabber.git.sourceforge.net/gitroot/emacs-jabber/emacs-jabber

coroa/global_energy_observatory_power_plants 1

Global Energy Observatory Power Plants

chaoflow/plumbum 0

Plumbum: Shell Combinators

chaoflow/tpv.cli 0

Command-line interfaces for tpv applications

push event invenia/IndexedDims.jl

Fernando Chorney

commit sha 274f5fd3cf8338839dc063aec09a01309f8de195

Automatically fix CI tests and cache steps

view details

Fernando Chorney

commit sha a83e0d0fc79f2d2e05cab5d456e4395741876ca8

Merge pull request #28 from invenia/fc/auto-ci-fix Fix CI Tests and Artifact Cache Issues

view details

push time in 2 hours

PR merged invenia/IndexedDims.jl

Fix CI Tests and Artifact Cache Issues

Use a CLI script to automatically replace the 1.3 test with a 1.5 test (if necessary), and use new cache steps to cache the Julia build artifacts

+9 -10

1 comment

2 changed files

fchorney

pr closed time in 2 hours

created tag PyPSA/atlite

tag v0.2.0

Atlite: Light-weight version of Aarhus RE Atlas for converting weather data to power systems data

created time in 3 hours

push event PyPSA/atlite

Fabian

commit sha fb5ad26f296273cb4ea6ed813aa2a25b2f3a5a6f

follow up III

view details

push time in 3 hours

push event PyPSA/atlite

Fabian

commit sha 35a46b7e972cd968d160957a987a7037e8c58fef

follow up II

view details

push time in 6 hours

push event PyPSA/atlite

Fabian

commit sha 1506151428d8e4e9a22ed8994ff2ccea184618fa

follow up

view details

push time in 7 hours

push event PyPSA/atlite

Fabian

commit sha 645090e8eec88be3b8d6f46d766e0e26cad232c1

no content change, trigger tagged commit

view details

push time in 9 hours

push event PyPSA/atlite

Fabian

commit sha d947f1ea5eb4a5565d720fe1fe18e43b4fde5b7b

fix readme rst code add rst text type to setup.py

view details

push time in 10 hours

push event PyPSA/atlite

Fabian

commit sha e3a517d23fc2c491feb410f61194a84ba14a99f6

DOC: update release notes and some function docstrings

view details

push time in 11 hours

created tag PyPSA/atlite

tag v0.2

Atlite: Light-weight version of Aarhus RE Atlas for converting weather data to power systems data

created time in a day

push event PyPSA/atlite

Fabian

commit sha fb0403fbb392aa0819b9ddba65c76fb715850c72

follow up II

view details

Fabian

commit sha 216c4e29e2f60a8dc3d90c8207608968fc951e8b

follow III

view details

Fabian

commit sha 3fbeb8e8abd4bc59cb35821d6ca1999172acb133

resize chart

view details

push time in a day

push event PyPSA/atlite

Fabian

commit sha 3fbeb8e8abd4bc59cb35821d6ca1999172acb133

resize chart

view details

push time in a day

push event PyPSA/atlite

Fabian

commit sha 216c4e29e2f60a8dc3d90c8207608968fc951e8b

follow III

view details

push time in a day

create branch PyPSA/atlite

branch: readme-chart

created branch time in a day

push event PyPSA/atlite

Fabian

commit sha 2e243d05319eea07fa24f0c175142b0a75890e5a

follow up, fix pdf

view details

push time in a day

push event PyPSA/atlite

Fabian

commit sha 5fbb135c61fc6f4c7c40cc1b99158524be50af64

README update workflow chart

view details

push time in a day

pull request comment PyPSA/atlite

Preparation of v0.2

finally :)

coroa

comment created time in a day

push event PyPSA/atlite

Jonas Hörsch

commit sha 12846ce6819b6756a8377ad0ec224d80b4cafb89

Preparation of v0.2 (#20) * documentation: Update index. * documentation: Add TODOs. * documentation: Add 'Examples' category. * resource.py add for accessing solarpanels and windturbines utlis.py add class arrowdict * Update configuration.rst * documentation: Add user guide and api ref. * Update README.md * documentation: Auto API reference documentation. * documentation: Structural updates. * doc: api reference: bug fixes, add structure * Delete os.path Accidentally uploaded. * example: Create cutout notebook. * documentation: Add release notes for v0.2. * doc enable napoleon extension * convert.py docstring update * wind.py update resource.turbines when downloading_turbineconf * fix strings, exclude '.yaml' * documentation: Update (WIP) * Licensing: Add copyright and license to all files (REUSE). * wind.py import resource.turbines in function due to import errror otherwise * Adjust documentation config for copyright and project name. * instructions.rst add images and 'how it works' description basics.rst distribute among other chapters * Update atlite-doc environment file with recommended environment dependencies. * documentation: Update SPHINX conf.py strings and automatic version number. * doc/cutout.rst update text and add commands convert.py small docstring corrections * delete basics.rst * doc/user-guide continue work * update environment_docs.yaml further writing in doc/user-guide * temporarly comment nbextension due to extensive memory consumption * reduce environment_doc.yaml dependencies radically (try out if it still works) * fix missing space in environment_docs.yaml * convert: Substitute deprecated cutout.meta with cutout.data. (#29) * Fix typo in README.md link to contributors. * doc/contributing.rst: Fix typo in heading. * __init__.py: Include new package docstring. * setup.py: Include new description and authors. * utils: Provide detailed re-creation information and catch error (fixes #33) * Move download_turbineconf to resource module To avoid circular dependencies. * config: Add _update_hooks list for registering for config updates * resource: Update resource dictionaries on config change * __init__.py load turbines and panel from resource.py wind.py remove unnecessary imports resource.py fix missing imports, use _update_resource_dict in download_windturbineconfig ensure replacing '-' in wind_config strings * Add minimal SPHINX Makefile. * environment_docs.yaml: try again with nbshinx extension, include other packages as required * Update names to windturbines and solarpanels for consistency * environment_docs.yaml comment nbsphinx installation again * configuration.rst: Update doc. Fixes #44 changse in documentation. * config: Allow more formats for configuration updates. (#44) * doc/makefile bugfix environment_doc test pypi installations due to memory efficiency doc/* small continuations * environment_docs another try with pyyaml in dependencies * environment_docs move back to conda, but exclude unnecessary packages * environment_docs build failed again (move nbsphinx into pip installation) * environment_docs commenting out nbsphinx due to memory error (again!) * Try to install "nbsphinx" in RTD with pip instead of conda. * Try different configuration for RTD. * Add pyyaml as extra required and RTD config cleanup * setup.py: Include toolz for doc installation. * Disable system_packages in RTD config. * setup.py: Include python-dateutil for doc install. * Substitute deprecated option in autodoc. 
* setup.py move missing packages to install_requires * Try RTD installation with conda/pip mixture again. * Change RTD config for installing package. * Remove local dir from RTD installation environment. * Change RTD environment to use fixed versions for lower RAM requirements. * Fix package version numbers in RTD environment. * Switch back to RST and remove duplications by softlinking. * MD does not allow for RST include directives making the switch necessary * By using the includes, we remove duplications in author listings in the documentation and README.rst and have the chance to have a separate AUTHORS.rst file * The release notes are also no longer duplicated and have a separate file in the root directory. An include-link brings them into the documentation. * Replace README.md with README.rst. * Prepare RTD for displaying and linking Jupyter notebooks. * Provide examples in documentation from subfolder via link to repo root. * Provide examples in documentation subfolder via link to repo root. (2) Examples which where missing from the commit before #72c0eda . * Disable nbsphinx cell execution in SPHINX for empty notebooks. Workaround for execution errors (do not execute noetbooks. Any output wanted has to be generated locally and uploaded). * setup.py: Fix wrong README import (.md instead of .rst). * Update RTD environment.yaml for nbsphinx. * Update formatting in example create_cutout.ipynb. * Rename Logfiles_and_messages.ipynb to logfiles_and_messages.ipynb Windows - Linux problem: Now solved. * Include a note on contributing examples in notebook format. * Try Jinja2 header for linking the doc with the notebook files. * Add how-to warnings to example and spell checking. * Add end-of-string to fix RTD build. * Try another preamble for the RTD Jinja2 filter. * Update RTD environment for improved performance. * Minor changes in the documentation on contributing. * Update RTD conf.py trying to fix nbsphinx_prolog / Jinja2 recipe. * Update RTDs conf.py to get correct path to examples in the repository. * Update RTDs conf.py to get correct path to examples in the repository (try 2). * Update RTDs conf.py to get correct path to examples in the repository (try 3). * Update RTDs conf.py to get correct path to examples in the repository (last try). * utils: Re-raise MergeError on unsuccessful automatic migration (#33) * Update module docstrings and update Cutout class docstring. * examples: add notebook fot plotting * examples and doc: make plotting_nb visible in docs * doc/examples create nb_link for plotting_with_atlite.ipynb * Update plotting_with_atlite.ipynb * Autoformatting (autoformat pep8) * Spell checks * Axis labels * Add note on descartes * Add our basic logging recommendation * era5: Move sanitation into dataset preparal * convert: Refactor from direct access to high-level functions * setup: Add requests dependency * sarah: Port atlite.datasets.sarah to new framework sarah: Update for data retrieval update sarah: Fix imports sarah: Fix _get_filenames sarah: Fix get_coords sarah: Add static_features sarah: Fix imports sarah: Fix get_data_era5 sarah: Add debug messages sarah: Fix get_data sarah: Fix _get_filenames sarah: Fixup implementation * era5: Fix handling of periods * era5: Create and clean up separate tmpdir for intermediate downloads The finalizer deletes files too early in a distributed setting, where the file handles are pickled and restored. 
* era5: Use gebco_path from config instead * data: Only download a single day of data if that is all we need * gis: Fix default crs setting rio.warp.reproject fails when using 'latlong', so we use the epsg number instead * data: Clean up tmpdir on exception * data: Add progress bar for writing to disk * sarah: Use a half-day chunk Leading to a memory use of about 2.2GB for a cutout the size of Europe. * datasets.common: Don't show the cdsapi download progress * sarah: Fix interpolation for newer dask versions * Update make.bat for windows to correspond to current minimal SPHINX file. * Update dependencies toolz is required by dask. * Fixes PyPSA/atlite#46 . * Unset unused directories for gebco, ncep, sarah, cordex. * Provide error message for unsuccessfull gdalwarp subprocess calls. * Extends and partially reverses euronion@7a225ca . * Reverse default config. * Rather than checking for a set gebco_path, we check if the path specified exists. * cutout: Add missing construct_filepath for properly treating config paths. * Change configuration mechanism for using GEBCO height. * cutout: Work around binary wheel incompatibility of netCDF4 and rasterio Refer to https://github.com/pydata/xarray/issues/2535, https://github.com/rasterio/rasterio-wheels/issues/12 * Invert default ordering in latitude dimension The default array ordering traverses now from small latitudes to large latitudes, since this is how ERA-5 organises its data by default and it lead to non-intuitive confusion several times. The most important changes are: - One now specifies cutout bound slices always from small to large: atlite.Cutout("bla", x=slice(-10, 10), y=slice(40, 45), time="2012-01") - Plotting using imshow has to be inverted explicitly from now on capacity_factor = cutout.wind(turbine="Vestas...", capacity_factor=True) plt.imshow(capacity_factor.transpose('y', 'x').values[::-1], extent=cutout.extent) We encourage users to use xarrays plotting facilities, instead, capacity_factor.plot() * Ensure temporary files are released before deleting (fixes #47) * Open cutout files with cache=False for saving memory Caching is still possible with `cutout.data = cutout.data.load()`. * convert: Make convert_wind fully dask-compatible * datasets.era5: Support requesting interpolated datasets * config: Store cutouts by default in the current directory * Remove cordex config settings CORDEX is available from CDS. * pv: Add simple latitude orientation With azimuth=180, it is angled to the south on the northern hemisphere and to the north on the southern hemisphere (due to the negative values for slope) * era5: Enable creating global cutouts One needs to use: x=slice(None), y=slice(None) * Revert "era5: Enable creating global cutouts" This reverts commit 3e7b4345c945a094593e6041e74da56d231a12a3. * utils: Prevent deprecation warning from pkg_resources. Absolute paths will be deprecated in pkg_resources.resource_filename. Including a leading (back-)slash is recognised as such and raises a warning, this change fixes this behaviour. * example plotting: Include figures, update to match v0.2 and misc. * Now including the figures for the example to be displayed in the doc. * Update the plotting commands to xarray's plot() to match the new v0.2 reverse indexing order * Misc: Show warnings and use the logger. * Add new example: Historic comparison for Germany (PV, wind in 2012). Example is based on the old "openmod-atlite-de.ipynb" example. 
* data: Fix no-missing-no-overwrite-branch (fixes #47) * utils: Correct error message for cutout migration. * On trying to automatically migrate an existing cutout, display the correct (new) order (min,max) for coordinates in the cutout recreation command. * Use the logging.error(...) facilities to correctly categorise the MergeError incl. the stack trace. * logging: Use .warning() API instead of deprecated .warn() calls. * utils: Only sort indices which can be in the wrong order * Remove config in favour of dataset parameters * sarah: Couple of small fixes and improvements * datasets.sarah: Make file searching more robust * setup.py: Update name and email * resource: Fix path of oedb wind_turbine_library in docstring * doc: Update cutout creation * Fix unmatched braket. * Update cutout creation example for ERA5 to new cutout signature. * convert: Fix the use of atlite.windturbines dict * gis: Do not pickle the spatial index Turns out the spatial index gets distorted by pickling, so we turn to_file and from_file to no-ops, until we find a work-around. * Update examples for new cutout signature. * Add documentation for cutout creation from SARAH-2 dataset. * Remove references to old configuration system and update index in doc. * Fix runoff conversion broken by #55ddd7f97c . * Add documentation for GEBCO in cutout creation. * Bump and versions in RTD environment to workaround compiling error. * examples: update plotting notebook * fix #62 * Fix cutout creation with odd bounds (era5) (#65) * fix #64 * ensure min/max assingments in coordinates * ensure correct ordering of slices * small fix * small fixup of #68 * use strtree instead of rtree * cutout.py: style, add line breaks to very long lines * Ensure backwards compatability in cutout creation. * fix literal and undefined name * Remove cutout_dir from cutout constructor main signature. * add test scripts * add test for loading and preparation * remove GridCell class * remove sindex references * change 'name' arg to 'path' * adjust test: delete temporary cutout files * add CachedAttribute decorator for property caching * fix sarah selector for file parsing * style only: break too long lines in [convert, data, sarah, gis, utils] * Add PR and issue templates. Co-authored-by: FabianHofmann <hofmann@fias.uni-frankfurt.de> * add conversion tests, fix small typo from previous commit * rename test script * Substitute deprecated .drop() by .drop_vars(). 
* test scripts: modify mktemp * add dx, dy, dt as properties to Cutout class * Dask compatibility (#77) * make interpolation optional * replace hourly_mean by resampling function * sarah module: apply autopep8 * fix typo * autopep8; move get_coords to high level * revise data.py, sarah.py * sarah use again as_slice * spread common.py among dataset modules * fix gebco as module * adjust test * change cutout representation * fix era5 static * fix features and allows multiple modules in cutout * fix feature preparation * sanitize prepare function, unify output of get_data * fix logging for requests * enable optional parallel loading * data.py: - remove literal_eval -> cutout_parameters are directly given to cutout.data.attr, slices are directly processes in coordinates creation - re-enable tmp_dir in cutout prepare - to_netcdf has different mode 'a'/'w' depending on whether file exists gebco.py: fix output of get_data sarah.py: set interpolate always to true * sarah.py re-enable optional interpolation * - add docstrings for all functions in data.py/sarah.py/era5.py - modify input variables of get_coords function. * add docstrings for all of cutout.py and gebco.py * cutout.py: - update docstrings for cutout class - remove support for cutou_dir, add warning and pointer to migration function - remove support for data argument as this requires further TODOs and can worked around very easily - remove default for module, this argument must be given - abolish is_view cases - add assertions for argument requirements 'x', 'y', 'time', 'module' when building new cutout * cutout.py: - Improve argument exception - Make cutout representation better - Ensure projection in cutout building * data.py make window class working with new feature handling tests: run tests for era5 and mixed ['sarah', 'era5'] cutouts cutout.py: reenable data as an optional argument * autopep8 in pv/*.py fix typo in migration function * first take for benchmarking: load data with chunks and apply conversion function without windows * cutout.py - clean imports - fully intergrate chunks as an cutout parameter and property - set chunking as standard loading of cutout data.py - use cutout.chunks property era5.py - use cutout.chunks property sarah.py - use cutout.chunks property * utils.py: fix import and swap_dimensions for old style cutout * pv module: - make module dask friendly, this commit removes all .values call which cause dask to be unable to chunk. - direct import of numpy functions which are often used convert.py: - restructure convert_and_aggregate function, this makes the function faster if only a layout is given. - change show_progress to bool only - change layout to be xr.DataArray only * convert: Fix heat demand hourshift for xarray 0.15.1 (#63) From xarray version 0.15.1, .values cannot be assigned. You should use the .assign_coords() method instead. See release notes for xarray 0.15.1 "breaking changes": http://xarray.pydata.org/en/stable/whats-new.html#v0-15-1-23-mar-2020 * convert.py - rename heatdemand array * cutout.py: - raise Error if old style * cutout.py: restructure handling of projection. The projection of different modules is tested when initializing the cutout. The property 'projection' will then only look at the projection of first module. 
* aggregate.py: remove aggregate_sum function convert.py: - try out dense operation for indicator matrix multiplication - replace aggregate_matrix function with tensor dot (still figuring out performance) gis.py: - argument shapes can now also be geopandas frame * convert.py: reinstatiate aggregate_matrix function, but fix name of index * convert.py: - make index valid for geopandas series and frame * Allow saving of pseudo-boolean cutout attributes to netCDF. * Fix interpolation option and defaults for SARAH cutouts. * convert.py fix division for capacity factor calculation cutout.py fix represention of features * Rename cutout parameters for interpolation of SARAH data. * review examples: - clean and run create_cutout and plotting notebook - delete tiny create_cutout.py script - add comparison script for verions 1 and 2 * Apply suggestions from code review Co-authored-by: Jonas Hörsch <jonas.hoersch@posteo.de> * add suggestions: - aggregate.py ensure index name - convert.py updat docstring - convert.py remove index.name defaults as done by aggregate_matrix function - convert.py reenable progrss bar for hydro - cutout.py fix import structure - data.py remove unneeded code - sarah.py add assertion for time resolution * data.py store booleans as int for to_netcdf * prepare cutout: move prepared_feature assignment directly before storing * update authors list fix suggestions in migration function * revise all imports * * change chunk size attribute from 'chunk_{dim}' to 'chunksize_{}' * fix cutout.prepared_features for one feature only Co-authored-by: Tom Brown <tom.brown@kit.edu> Co-authored-by: euronion <42553970+euronion@users.noreply.github.com> Co-authored-by: Jonas Hörsch <jonas.hoersch@posteo.de> * aggregate.py do not immute index with no name * examples: rerun and comment * example: historic reanalysis notebook, add head comment, fix headings * examples: rerun sarah notebook * doc: - remove User Guide section in favour of notebooks - add notebook for gebco heightmap cutout.py & data.py: small fix for cutout with spatial dimension only * tests: Use pytest fixtures and marks (#83) * data: Re-enable parallelized queueing (#87) * era5: Use lock to support download progress bars * era5: Fix retrieval_times * era5: Re-enable warnings * era5: Fix int64 is not serializable to JSON exception * data: Re-enable parallelized queueing Since queueing times are significantly longer than downloading, combine dask.delayed with a lock to queue in parallel but download in series. * Document breaking changes, user warning due to change in cutout index order. 
* fix updating features & sanitize static variables in era5 (#91) * era5.py - set output of get_data to xr.Dataset - sanitize static features, whereas the variable should not have a time dimension, the returned dataset of get_data should have data.py - parallize feature loading -> delayed call of get_data * era5.py - set delete file message to logger.debug * era5.py fix typo * era5.py - bug fix time keys * data.py: - store data file without appending to netcdf as it is unsecure, instead create a new one era5.py - revert time assigning to static feature as not needed test - add test for updating a variable * data.py - fix name of temporary file era5.py - stream info log of client through logger.debug * Update atlite/data.py Co-authored-by: euronion <42553970+euronion@users.noreply.github.com> * reenable progress bar * era5: Logging differentiates between request and download (#94) * cutou.py: - selection must have a separate name data.py: - make tempfile creation safe - logging for tmp dir test - adjust test for selection era5.py - fix division by zero * test - use fixture for updatable cutout * era5.py remove sleep time again cutout.py set chunks to 100 as it is faster Co-authored-by: euronion <42553970+euronion@users.noreply.github.com> Co-authored-by: Jonas Hörsch <jonas.hoersch@posteo.de> * [v0.2] review suggestions (#95) * cutout: Make new `path` argument optional * cutout: Make dx/dy calculation more robust * data: Do not leak the temporary directory * bugfixes of PR #95 * update README.rst * Fixup README * Fixup README II * add workflow chart * README: make workflow image look better * test remove dummy test * AUTHORS.rst fix table * README.rst fix inclusion of authors list * README.rst fix authors list II * Fix link to authors in README.rst. * README.rst reintegrate Installation section * data.py fix return_capacity with layout not None (#97) * era5.py disable warning for ds.drop() test: temporarly disable windows machines * Fix divisions by zero (#98) * solve #55 * fixup * era5.py remove errstate with block as obsolete * era5-module: tiny fix string formatting * data.py: fixup for no features given * cutout.py: add 'module' to __repr__ * data.py tiny code style fix * aggregate.py use dask_gufunc_kwargs * Ci mamba (#110) * ci: use mamba instead of conda * follow up, add comment [skip travis] * follow up * follow up, fix conda activate * ci: playaround, remove conda specifications * fix numpy <-> daks incompatibility in clip function (#111) * irradiation.py: replace .clip by .where due to new numpy/dask incompatibility * follow up, only apply .where where necessary * travis: do not fix mamba version * Windows machine fix (#109) * enable ci on windows * data.py: use TemporaryDirectory instead of mkdtemp * data.py revert last commit, try now with wrapper * fix travis env for windows machines * follow up: write pip and pytest dependencies in env file * env: add libspatialindex to requirements * travis: reintroduce strict channel order due to installation problems on windows * Cutout grid (#112) * introduce Cutout.grid make Cutout.grid_cells and Cutout.grid_coordinates deprecated * follow up * adjust plotting example * update release notes * test_creation.py adjust test * test: tiny fix up * add crs to Cutout.grid * follow up: add comment [skip travis] * release notes: fix typo [skip travis] * Update to pyproj 2 (#92) * Rename projection to crs Follows pyproj in nomenclature. See https://pyproj4.github.io/pyproj/stable/gotchas.html#upgrading-to-pyproj-2-from-pyproj-1 . 
* environment: Remove channel pinning Channel pinning has been superseed by strict channel_priority as proposed at https://conda-forge.org/docs/user/tipsandtricks.html. * gis: Add grid_cell_areas function to compute areas of grid cells * cutout: Fix forgotten conversion * gis: Improve grid_cell_areas * remove area calculation due to geopandas implementation * update release notes * gis.py: revise imports Co-authored-by: Fabian <fab.hof@gmx.de> * gebco: Extract and resample data from GEBCO using rasterio [DNMY] (#93) * gebco: Extract and resample data from GEBCO using rasterio * tiny fixup of inversed y-axis and data array accessing * fix numeric tags Co-authored-by: Fabian <fab.hof@gmx.de> * use 'cea' projection instead of 'aea' (alternative to #102) * Modified cutout creation (#114) * * add warning for ignoring cutoutparams if cutout already exists * reintroduce Cutout.prepared * follow up * gis.py: adjust deprecation warning * data.py set "feature" attribute for every variable (#115) cutout.py make prepared features more secure * Cutout merge (#116) * cutout.py add merge function pytest add merge test * cutout.py: when data is passed and path is non-existent, write out file path in cutout.merge and cutout.sel has to be non-existent * adjust docstrings * revert second last commit, add cutou.to_file function * revert unneeded assert * follow up: update docstrings [skip travis] * irradiation.py: add comment to clipping, use other approach solar_position.py: saver/cleaner approach for chunking * utils.py: track module and feature when migrating (#117) * Convert fix (#118) * convert.py catch case of no layout given * convert.py: restructure convert_and_aggregate for correctly handling all input combinations * test: pv add rounding to assert * convert.py secure division by zero * update examples * update years and authors Co-authored-by: euronion <42553970+euronion@users.noreply.github.com> Co-authored-by: Fabian <fab.hof@gmx.de> Co-authored-by: FabianHofmann <hofmann@fias.uni-frankfurt.de> Co-authored-by: Tom Brown <tom.brown@kit.edu>

view details

push time in a day

PR merged PyPSA/atlite

Preparation of v0.2

The first official version of atlite is getting closer and there is a bunch of good stuff coming.

Changes

The main change is that a cutout corresponds to a single netCDF file for the whole cutout period, which is fully accessible as an xarray dataset at cutout.data.

  1. This makes it possible to iterate over the data in customizable slices: cutout.wind(shapes=countries, turbine="Vestas_V90_3MW") uses months as in the previous version, while, for example, cutout.wind(..., windows='Y') uses years. windows can be anything that pd.Grouper understands, i.e. 'D', 'M', 'Y' or even '2D'. windows = False makes it possible to apply the conversion function to the data as a whole. It should also be possible to choose windows compatible with a particular time zone to avoid the re-averaging that was necessary for heat_demand.

  2. The data for cutouts is now grouped into different features (from the ERA-5 dataset):

    features = {
        'height': ['height'],
        'wind': ['wnd100m', 'roughness'],
        'influx': ['influx_toa', 'influx_direct', 'influx_diffuse', 'influx', 'albedo'],
        'temperature': ['temperature', 'soil_temperature'],
        'runoff': ['runoff']
    }
    

    It's possible to prepare a cutout only for a subset of the available features: cutout.prepare(['runoff', 'wind']). One can always extend the cutout by running prepare again.

  3. One can load a cutout fully into memory using cutout.data.load() (or cutout.data.wnd100m.load(), cutout.data.roughness.load() for wind only), which should fully supersede @euronion's caching from #9 .

  4. It's easy to get a subset of a cutout using atlite's own sel function, which wraps xarray's .sel: cutout.sel(time="2012-01") or cutout.sel(time="2012-07", bounds=german_shape.bounds). A short usage sketch covering these points follows below.
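
A hedged usage sketch of the workflow described in points 1-4, assuming the v0.2 cutout signature (module, x/y slices, time) discussed in this PR; the shapes file and its index column are hypothetical placeholders, not part of atlite:

    import atlite
    import geopandas as gpd

    # Sketch only: build a small ERA5 cutout (bounds are given from small to large in v0.2)
    cutout = atlite.Cutout("europe-2012-01", module="era5",
                           x=slice(-10, 10), y=slice(40, 45), time="2012-01")

    # Prepare only a subset of the available features; running prepare again extends the cutout
    cutout.prepare(["runoff", "wind"])

    # Aggregate wind generation per shape with yearly windows (any pd.Grouper frequency works)
    countries = gpd.read_file("countries.geojson").set_index("name")  # hypothetical input file
    wind = cutout.wind(shapes=countries, turbine="Vestas_V90_3MW", windows="Y")

    # Load everything (or single variables) into memory to avoid repeated disk access
    cutout.data.load()

    # Take a temporal or spatial subset of the cutout
    january = cutout.sel(time="2012-01")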

Open questions

  • [x] config.py has been completely removed; instead, one has to provide the necessary paths explicitly when creating new cutouts. In addition, we could allow reading in a config file like ~/.atlite.config or some such?
  • [x] Should data cleaning methods be moved into the dataset modules (e.g. surface roughness <= 0. -> 0.002)? I think that would be a good idea! Are there objections?
  • [x] When data is read in as dask arrays, it is not mutable in the conversion functions, leading to exceptions. We can either change everything to copy-on-change (i.e. use clip) or catch the error and raise a more helpful message asking to prepare the dataset first? Related to #30. To be conservative, we load dask arrays before they are passed to the conversion functions (see the sketch after this list).
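
A possible copy-on-change sanitation along the lines of the second point above, written as a hedged sketch: the variable name roughness and the 0.002 floor are taken from this list, while the function name and its placement in a dataset module are assumptions.

    import xarray as xr

    def sanitize_roughness(ds: xr.Dataset, floor: float = 0.002) -> xr.Dataset:
        # Hypothetical helper: replace non-positive surface roughness with a small floor
        # value without mutating the (possibly dask-backed) array in place.
        return ds.assign(roughness=ds["roughness"].where(ds["roughness"] > 0.0, floor))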

Remaining TODOS

  • [X] Add a .sel method to produce a view on a subset of the data
  • [ ] Mailing list for announcements
  • [x] Release notes (done in documentation branch)
  • [ ] Migration instructions
  • [x] Examples should set up for showing warnings: #27 . (done in documentation branch)
  • [ ] Other datasets:
    • [x] sarah
    • [ ] cordex
    • [ ] ncep
    • [ ] efas
  • [x] Merge documentation branch

I'm happy for everyone who wants to test the new version, provide feedback, or help with the documentation or the remaining todos! @leonsn @nworbmot @FabianHofmann @schlott @fneum

+11707 -2313

32 comments

85 changed files

coroa

pr closed time in a day

push eventPyPSA/atlite

Fabian

commit sha d07be82ef48fe26e30cbe126c1dd4d0c81fab7f9

update examples

view details

Fabian

commit sha 6a012147d609301bf9299b62f6e1eaa9289ab7d9

update years and authors

view details

push time in a day

issue closed PyPSA/whobs-server

Allow upper and lower bounds on capacity for each technology

Would also allow to fix the capacity of particular technologies.

closed time in a day

nworbmot

issue comment PyPSA/whobs-server

Allow upper and lower bounds on capacity for each technology

Done in commit https://github.com/PyPSA/whobs-server/commit/6feaf1a4666ffc1be5bedbb3f1321349dc4bc598.

nworbmot

comment created time in a day

issue comment PyPSA/whobs-server

Real World demand data input

It just needs to be programmed! See https://github.com/PyPSA/whobs-server/issues/8.

freechelmi

comment created time in a day

commit comment event

push event conda-forge/atlite-feedstock

cf-blacksmithy

commit sha 06ba163a13e7b0aa20051127bdc82e3b70bac98f

[ci skip] [skip ci] [cf admin skip] ***NO_CI*** admin migration CFEP13TokenCleanup

view details

push time in a day

issue comment conda-forge/atlite-feedstock

@conda-forge-admin, please add bot automerge

Hi! This is the friendly automated conda-forge-webservice.

I just wanted to let you know that I added bot automerge in conda-forge/atlite-feedstock#7.

coroa

comment created time in a day

PR opened conda-forge/atlite-feedstock

[ci skip] [cf admin skip] ***NO_CI*** adding bot automerge

Hi! This is the friendly automated conda-forge-webservice.

I've added bot automerge as instructed in #6.

Merge this PR to enable bot automerging.

Fixes #6

+2 -0

0 comments

1 changed file

pr created time in a day

pull request comment conda-forge/atlite-feedstock

Add Fabian Hoffmann as fellow recipe maintainer

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

coroa

comment created time in a day

issue comment PyPSA/pypsa-eur-sec

Documentation/advice regarding choice of workflow system (snakemake vs. pachyderm)

@nworbmot Thank you for the response. I decided to move forward with Apache Airflow and PySpark, though I am fascinated with snakemake and may use it on a future project. I decided against pachyderm because it looks like a vendor-locked ecosystem, and its most compelling feature, provenance, seems to reinforce that vendor lock-in.

ToddG

comment created time in 2 days

PR opened PyPSA/pypsa-eur

simplify: delete bus columns with incorrect entries

Dead-end buses with possibly different substation_lv, substation_off and under_construction values are being aggregated, but the dead-end bus is simply removed without updating the respective attribute of the bus it is aggregated to.

Since we don't need those attributes (they are non-existent after cluster_network), I suggest dropping them to avoid confusion. A minimal sketch of the change follows below.
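
A hedged sketch of what the one-line change could look like; the helper name is hypothetical, while the column names are the ones listed above:

    import pandas as pd

    def drop_inconsistent_bus_columns(buses: pd.DataFrame) -> pd.DataFrame:
        # Hypothetical helper mirroring the proposed change: drop bus attributes whose
        # values can become wrong once dead-end buses are aggregated away.
        return buses.drop(columns=["substation_lv", "substation_off", "under_construction"],
                          errors="ignore")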

+1 -0

0 comments

1 changed file

pr created time in 2 days
