
deltachat/deltachat-core-rust 202

Delta Chat Rust Core library, used by Android/iOS/desktop apps and bindings

mmoya/ansible-playbooks 55

Example playbooks for Ansible

merchise/fortaleza 11

Fortaleza RPG from the nineties, Cuba

mmoya/ansible-commander 7

early development stage project -- REST api, server, plugins, ui, & cli client

formorer/pkg-libnetfilter-conntrack 1

Debian Packaging for libnetfilter-conntrack

mmoya/ansible-201 1

Slides for ansible workshop delivered in ShuttleCloud's office on 20160305

mmoya/audaciousremote 1

Audacious Remote for iPhone

mmoya/faircoin_exporter 1

Prometheus exporter for FairCoin CVN metrics

mmoya/msti-chatrpc 1

Simple chat using Sun's RPC protocol

mmoya/aioimaplib 0

Python asyncio IMAP4rev1 client library

Pull request review comment deltachat/deltachat-core-rust

Don't download ranges of messages (i.e. first:last)

 fn get_fallback_folder(delimiter: &str) -> String {
     format!("INBOX{}DeltaChat", delimiter)
 }
 
+/// Builds a list of sequence/uid sets. The returned sets have each no more than around 1000
+/// characters because according to https://tools.ietf.org/html/rfc2683#section-3.2.1.5
+/// command lines should not be much more than 1000 chars (servers should allow at least 8000 chars)
+fn build_sequence_sets(mut uids: Vec<u32>) -> Vec<String> {
+    uids.sort_unstable();

I guess the CPU time required to sort a few numbers is negligible.

Hocuri

comment created time in 6 hours
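The review above concerns chunking UID fetches into sequence sets. A minimal self-contained sketch of the chunking idea, assuming the ~1000-character guideline from RFC 2683 quoted in the doc comment (the exact threshold handling is an assumption, not the core's code):

```rust
/// Sketch: sort the UIDs, then emit comma-separated sequence sets,
/// starting a new set once the current one approaches 1000 characters
/// (RFC 2683 recommends keeping IMAP command lines around 1000 chars).
fn build_sequence_sets(mut uids: Vec<u32>) -> Vec<String> {
    uids.sort_unstable();
    let mut sets = Vec::new();
    let mut current = String::new();
    for uid in uids {
        if !current.is_empty() {
            current.push(',');
        }
        current.push_str(&uid.to_string());
        if current.len() > 990 {
            // close off this set and start a fresh one
            sets.push(std::mem::take(&mut current));
        }
    }
    if !current.is_empty() {
        sets.push(current);
    }
    sets
}

fn main() {
    // sorting means the order of the input does not matter
    assert_eq!(build_sequence_sets(vec![3, 1, 2]), vec!["1,2,3".to_string()]);
    // many UIDs are split across several sets, each under the limit
    let sets = build_sequence_sets((1..=400).collect());
    assert!(sets.len() > 1);
    assert!(sets.iter().all(|s| s.len() <= 1000));
}
```

Sorting first also makes the resulting sets contiguous, which keeps each FETCH command predictable in size.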

push event deltachat/deltachat-core-rust

B. Petersen

commit sha 4f836950bc7efd95370ea8612a7a820b4cb07998

split off image recoding to separate function

view details

B. Petersen

commit sha 7c15e4e9485fd602cbbd16e8667c8ceb4f0d8748

fix rotation for small images and avatars

view details

B. Petersen

commit sha 2720d345945c8027b54b871c3ea80ca05f8d4b3f

use split off image recoding function for avatars

view details

B. Petersen

commit sha 75d79dc79ca254923bfb910a136a26be81bf826a

use different avatar-sizes for different media_quality settings

view details

B. Petersen

commit sha 4ef2a7c8d746dbd0d6fa00d81c24a1f117a5e6b8

prefer exhaustive 'match' over 'if'

view details

Hocuri

commit sha f688c2e6edd5c591c9993fd03e37eee8baf2b2d5

Don't download ranges of messages (i.e. first:last)

view details

Hocuri

commit sha d82b2e21b8cc9bf52dbfa8da55b139dea8633bde

Always sort the UIDs

view details

push time in 7 hours

push event deltachat/deltachat-core-rust

Hocuri

commit sha 37e9a87a7faa1e9b0083c993bf3629242dc9fb8f

Fix another test: don't fetch from uid_next-1 but uid_next; make some {} to {:#} so that we can use `.context(...)`

view details

push time in 7 hours

push event deltachat/deltachat-core-rust

push time in 14 hours

Pull request review comment deltachat/deltachat-core-rust

Re-enable Export to the new backup format, add backup progress, add a test for the backup progress

 async fn export_backup_inner(context: &Context, temp_path: &PathBuf) -> Result<(
         .append_path_with_name(context.get_dbfile(), DBFILE_BACKUP_NAME)
         .await?;
 
-    context.emit_event(EventType::ImexProgress(500));
-
-    builder
-        .append_dir_all(BLOBS_BACKUP_NAME, context.get_blobdir())
-        .await?;
-
-    builder.finish().await?;
-    Ok(())
-}
-
-async fn export_backup_old(context: &Context, dir: impl AsRef<Path>) -> Result<()> {
-    // get a fine backup file name (the name includes the date so that multiple backup instances are possible)
-    // FIXME: we should write to a temporary file first and rename it on success. this would guarantee the backup is complete.
-    // let dest_path_filename = dc_get_next_backup_file(context, dir, res);
-    let now = time();
-    let dest_path_filename = get_next_backup_path_old(dir, now).await?;
-    let dest_path_string = dest_path_filename.to_string_lossy().to_string();
-
-    sql::housekeeping(context).await;
-
-    context.sql.execute("VACUUM;", paramsv![]).await.ok();
-
-    // we close the database during the copy of the dbfile
-    context.sql.close().await;
-    info!(
-        context,
-        "Backup '{}' to '{}'.",
-        context.get_dbfile().display(),
-        dest_path_filename.display(),
-    );
-    let copied = dc_copy_file(context, context.get_dbfile(), &dest_path_filename).await;
-    context
-        .sql
-        .open(&context, &context.get_dbfile(), false)
-        .await?;
-
-    if !copied {
-        bail!(
-            "could not copy file from '{}' to '{}'",
-            context.get_dbfile().display(),
-            dest_path_string
-        );
-    }
-    let dest_sql = Sql::new();
-    dest_sql
-        .open(context, &dest_path_filename, false)
-        .await
-        .with_context(|| format!("could not open exported database {}", dest_path_string))?;
-
-    let res = match add_files_to_export(context, &dest_sql).await {
-        Err(err) => {
-            dc_delete_file(context, &dest_path_filename).await;
-            error!(context, "backup failed: {}", err);
-            Err(err)
+    let read_dir: Vec<_> = fs::read_dir(context.get_blobdir()).await?.collect().await;
+    let count = read_dir.len();
+    let mut written_files = 0;
+
+    for entry in read_dir.into_iter() {
+        let entry = entry?;
+        let name = entry.file_name();
+        if !entry.file_type().await?.is_file() {
+            warn!(
+                context,
+                "Export: Found dir entry {} that is not a file, ignoring",
+                name.to_string_lossy()

Oops, no, it's not possible. For some reason this is an ffi::OsString, which has a display() function, but it's private.

Hocuri

comment created time in 14 hours
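For context on the display()-vs-to_string_lossy() question: in the standard library, Path exposes a public display() for lossy printing, while to_string_lossy() replaces any invalid UTF-8 with U+FFFD. A small std-only illustration (this is std, not the async-std ffi::OsString the comment refers to):

```rust
use std::path::Path;

fn main() {
    let p = Path::new("blobdir").join("avatar.png");
    // Path::display() yields a printable, lossy view of a possibly
    // non-UTF-8 path, suitable for log messages:
    let shown = p.display().to_string();
    // to_string_lossy() returns a Cow<str>, substituting U+FFFD for
    // invalid UTF-8; for valid UTF-8 paths both views agree:
    assert_eq!(shown, p.to_string_lossy().as_ref());
    println!("{}", shown);
}
```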

push event deltachat/deltachat-core-rust

Hocuri

commit sha 3542571849ec51a5a8efcc54c268622a70446e33

Use name.display instead of name.to_string_lossy

view details

push time in 14 hours

Pull request review comment deltachat/deltachat-core-rust

Re-enable Export to the new backup format, add backup progress, add a test for the backup progress


Done.

Hocuri

comment created time in 14 hours

Pull request review comment deltachat/deltachat-core-rust

Re-enable Export to the new backup format, add backup progress, add a test for the backup progress

 def ac_outgoing_message(self, message):
         m = message_queue.get()
         assert m == msg_in
 
-    def test_import_export_online_all(self, acfactory, tmpdir, lp):
+    def test_import_export_online_all(self, acfactory, tmpdir, data, lp):
         ac1 = acfactory.get_one_online_account()
 
         lp.sec("create some chat content")
-        contact1 = ac1.create_contact("some1@example.org", name="some1")
-        contact1.create_chat().send_text("msg1")
+        chat1 = ac1.create_contact("some1@example.org", name="some1").create_chat()
+        chat1.send_text("msg1")
         assert len(ac1.get_contacts(query="some1")) == 1
+
+        original_image_path = data.get_path("d.png")
+        chat1.send_image(original_image_path)
+
         backupdir = tmpdir.mkdir("backup")
 
         lp.sec("export all to {}".format(backupdir))
-        path = ac1.export_all(backupdir.strpath)
-        assert os.path.exists(path)
+        with ac1.temp_plugin(ImexTracker()) as imex_tracker:
+            path = ac1.export_all(backupdir.strpath)
+            assert os.path.exists(path)
+
+            # check progress events for export
+            assert imex_tracker.wait_progress(1, progress_upper_limit=249)

In this line we test that some number between 1 and 249 is emitted at some point (the logic for this lives in wait_progress()). That's as fine-grained as this test can get, because only 4 progress events are emitted.

That's because a progress event is only emitted after a file has been saved successfully, so to get more progress events we would need to send more files.

Hocuri

comment created time in 14 hours
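The per-file progress logic being described can be sketched as follows; the function name and the 500..1000 range are assumptions for illustration, not the actual deltachat implementation:

```rust
/// Hypothetical sketch: map "files written so far" onto a progress range,
/// emitting one value per successfully saved file. With only 4 files,
/// only 4 coarse progress values are ever produced, which is why the
/// test above can only assert a broad range.
fn progress_after_file(written_files: usize, count: usize, lo: usize, hi: usize) -> usize {
    lo + (hi - lo) * written_files / count
}

fn main() {
    let values: Vec<_> = (1..=4).map(|n| progress_after_file(n, 4, 500, 1000)).collect();
    // four files yield exactly four progress steps
    assert_eq!(values, vec![625, 750, 875, 1000]);
    println!("{:?}", values);
}
```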

push event deltachat/deltachat-core-rust

Hocuri

commit sha 30f20a2133930bd55045fbdf008e61ec51b91595

fixing things again

view details

push time in 16 hours

push event deltachat/deltachat-core-rust

Hocuri

commit sha fcd6ff64f3e69225c5d776f616b6c4df5caadb6c

fixing things again

view details

push time in 16 hours

started cknd/stackprinter

started time in a day

push event deltachat/deltachat-core-rust

B. Petersen

commit sha 4f836950bc7efd95370ea8612a7a820b4cb07998

split off image recoding to separate function

view details

B. Petersen

commit sha 7c15e4e9485fd602cbbd16e8667c8ceb4f0d8748

fix rotation for small images and avatars

view details

B. Petersen

commit sha 2720d345945c8027b54b871c3ea80ca05f8d4b3f

use split off image recoding function for avatars

view details

B. Petersen

commit sha 75d79dc79ca254923bfb910a136a26be81bf826a

use different avatar-sizes for different media_quality settings

view details

B. Petersen

commit sha 4ef2a7c8d746dbd0d6fa00d81c24a1f117a5e6b8

prefer exhaustive 'match' over 'if'

view details

push time in 2 days

delete branch deltachat/deltachat-core-rust

delete branch : tweak-image-scaling

delete time in 2 days

PR merged deltachat/deltachat-core-rust

scale avatars based on media_quality, fix avatar rotation

this pr changes the avatar-pixel-size from unconditional 192x192 to 128x128 for low-media-quality and 256x256 for high-media-quality.

  • both sizes are fine to display the avatar in the chatlist or in the chat

  • since avatars were introduced one year ago, enlarging avatars was added on all platforms. however, the 192x192 was a bit poor for viewing details; this should be much better with 256x256 now, as the number of pixels is roughly doubled

  • otoh, for regions with poor traffic, the avatar byte-size should be roughly halved if low media-quality is chosen

moreover, this pr fixes bugs wrt rotation:

  • exif-rotation-information for avatars was ignored before, which sometimes led to accidentally rotated avatars. this is fixed; normal-image-recoding and avatar-recoding share the same code now.

  • exif-rotation-information for normal images was accidentally ignored before when scaling was not needed. this is also fixed.

closes #2062

+59 -49

1 comment

4 changed files

r10s

pr closed time in 2 days
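The pixel-count arithmetic behind "roughly doubled" and "roughly halved" in the PR description above checks out: 256² / 192² ≈ 1.78 and 128² / 192² ≈ 0.44. A quick verification:

```rust
fn main() {
    // pixel counts for the three avatar sizes discussed in the PR
    let high = (256u32 * 256) as f64; // 65536
    let old = (192u32 * 192) as f64;  // 36864
    let low = (128u32 * 128) as f64;  // 16384

    // high-media-quality roughly doubles the pixel count...
    assert!((high / old - 1.78).abs() < 0.01);
    // ...while low-media-quality roughly halves it
    assert!((low / old - 0.44).abs() < 0.01);
    println!("high/old = {:.2}, low/old = {:.2}", high / old, low / old);
}
```

Byte-size scales with pixel count only approximately (JPEG compression varies with content), hence the PR's hedged "roughly".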

issue closed deltachat/deltachat-core-rust

Use smaller avatar size when low media quality is enabled

#1883 is not going to be implemented soon, so a fast way to reduce traffic consumption is to use a smaller avatar size: 64x64 or 128x128 instead of the default 192x192.

Related feature request on the forum: https://support.delta.chat/t/high-quality-avatar/1052

closed time in 2 days

link2xt

Pull request review comment deltachat/deltachat-core-rust

scale avatars based on media_quality, fix avatar rotation

 impl<'a> BlobObject<'a> {
         true
     }
 
-    pub fn recode_to_avatar_size(&self, context: &Context) -> Result<(), BlobError> {
+    pub async fn recode_to_avatar_size(&self, context: &Context) -> Result<(), BlobError> {
         let blob_abs = self.to_abs_path();
-        let img = image::open(&blob_abs).map_err(|err| BlobError::RecodeFailure {
-            blobdir: context.get_blobdir().to_path_buf(),
-            blobname: blob_abs.to_str().unwrap_or_default().to_string(),
-            cause: err,
-        })?;
-
-        if img.width() <= AVATAR_SIZE && img.height() <= AVATAR_SIZE {
-            return Ok(());
-        }
-
-        let img = img.thumbnail(AVATAR_SIZE, AVATAR_SIZE);
-
-        img.save(&blob_abs).map_err(|err| BlobError::WriteFailure {
-            blobdir: context.get_blobdir().to_path_buf(),
-            blobname: blob_abs.to_str().unwrap_or_default().to_string(),
-            cause: err.into(),
-        })?;
+        let img_wh = if MediaQuality::from_i32(context.get_config_int(Config::MediaQuality).await)
+            .unwrap_or_default()
+            == MediaQuality::Balanced

yip, that's better, i added a commit (and also rebased)

r10s

comment created time in 2 days

push event deltachat/deltachat-core-rust

B. Petersen

commit sha e69cbd06ac88d4bc8e36092558cb3f93289766dc

prefer exhaustive 'match' over 'if'

view details

push time in 2 days

push event deltachat/deltachat-core-rust

Alexander Krotov

commit sha 46c544a5cae7d70ca627f1eb02c00edf76fc2c6e

deltachat-ffi: forbid quoting messages from another context

view details

Alexander Krotov

commit sha bf82dd9c6047d0b3d48d9f9d7e2833fce8dfbf60

Document account_id parameter for dc_accounts_remove_account

view details

Alexander Krotov

commit sha 66907c17d3539dc13acf8ac175bd3a26651b28e9

mimeparser: preserve quotes in messages with attachments

view details

Alexander Krotov

commit sha deb506cb5298b33c42fe91e4dbd098dababf329d

Add timestamps to images and videos. It is already done for voice messages and makes saving attachments to one folder easier.

view details

B. Petersen

commit sha 332f32e799a61e43d47f25f562b05a9b544b4bec

update changelog for 1.49

view details

B. Petersen

commit sha 6345e5772002fd139d50a62b38d0ea1ea1ef749b

bump version to 1.49

view details

bjoern

commit sha 1f4403d1496b284877dd86530730d68436458854

Merge pull request #2071 from deltachat/prep-1.49 prep 1.49

view details

B. Petersen

commit sha bd856d90db2ced8bb3b6545427c4698b58ecd661

remove unused AccountConfig::name. The field was never set or read; to get the name for an account, use `dc_get_config(account, "displayname")`.

view details

Alexander Krotov

commit sha 9df88745dc30f7f462fd8c3fce5d8a2379fde466

smtp: do not use STARTTLS when PLAIN connection is requested. Also do not allow downgrade if STARTTLS is not available

view details

link2xt

commit sha 07768133d541ab9259b4a3e1664735e15dbd1619

Merge pull request #2083 from deltachat/smtp-plain smtp: do not use STARTTLS when PLAIN connection is requested

view details

holger krekel

commit sha 2fbef80df8e4d1259e4b9883863f68e7d3650770

detailing/rewriting the group-sync draft a little.

view details

holger krekel

commit sha 8d2a5cd2428504ef44002463affba357de2540b7

fix link

view details

holger krekel

commit sha d66174e55a53921cbaf38962956bfe7dbaec3ce7

let core carry a 1.50.0-alpha.0 version (which sits between 1.49.X and 1.50.0). This way we can better distinguish tagged from untagged core releases, also in logs etc. We might, from time to time, increase the alpha.N "N" number if we are entering testing etc.

view details

Hocuri

commit sha c8d4eee79444275bafe6c6f6ffeaff70f33f3afd

Don't fetch from INBOX if it is disabled. Before, if there were more than 20 jobs at once, we unconditionally fetched from the inbox

view details

Hocuri

commit sha 3a7bd8b49ddafb3e992581c2f6055ed8dd6b78f6

Don't fetch messages that arrived in the meantime if InboxWatch is disabled and re-enabled. That's another narrow-stitching patch for a scenario where many emails could surprisingly be deleted all at once: the user disables inbox-watch, enables delete-from-server, and after a month enables inbox-watch again -> currently, all emails that arrived in the meantime would be deleted (if emails are not fetched, they won't be deleted)

view details

Hocuri

commit sha 38ed94367cf22522232fe1b1ade7512bc1594c69

Update src/config.rs Co-authored-by: bjoern <r10s@b44t.com>

view details

bjoern

commit sha 4fa667d834b521f01e3815793c7d0948df2aa5d3

Merge pull request #2084 from deltachat/alphaversions let core carry a 1.50.0-alpha.0 version

view details

bjoern

commit sha 4d2a39febb765035c922fa942c6f6e0024078b60

Merge pull request #2087 from deltachat/imbox-dont-fetch-old-msgs Don't fetch messages that arrived in the meantime if InboxWatch is disabled and re-enabled

view details

B. Petersen

commit sha 92175b27ab641e7a4ae9d7424223dd1d61a948cf

update changelog for 1.50

view details

B. Petersen

commit sha b21508fdb73979a95d1f9fc5ac0fdca75c3c4ffa

bump version to 1.50

view details

push time in 2 days

pull request comment deltachat/deltachat-core-rust

[WIP] Upload and download large attachements through HTTP

haven't looked deeply into it yet, but there is an XMPP "http-file-upload" standard https://xmpp.org/extensions/xep-0363.html to which @inputmice contributed. Sidenote: he is also involved with "lttr.rs", an Android app using JMAP as the e-mail protocol. Unfortunately the ".rs" does not mean it involves Rust :)

Frando

comment created time in 2 days

push event deltachat/deltachat-core-rust

Hocuri

commit sha fdedf6b26b4a23dd4ba95f5cc39ad2b98c4c70ef

DB migration (untested)

view details

Hocuri

commit sha 2958709c5430155713deec00adadcaf9eb2bba43

Store uid_next in SQL instead of lastseen in a config

view details

Hocuri

commit sha e2e1b8066d50a2092aff1a7871192c640d686c61

Revert "If Inbox-watch is disabled and enabled again, do not fetch emails from in between" all folders are always watched, anyway

view details

Hocuri

commit sha dd3e1388ad460d7a0a5e86754733612c490c78bb

clippy, rm debug logs, comments

view details

Hocuri

commit sha 258a356084e80aaefb2bba77581aae72124139b4

Codestyle, comments

view details

push time in 2 days

pull request comment deltachat/deltachat-core-rust

scale avatars based on media_quality, fix avatar rotation

Related feature request on the forum: https://support.delta.chat/t/high-quality-avatar/1052

r10s

comment created time in 2 days

Pull request review comment deltachat/deltachat-core-rust

scale avatars based on media_quality, fix avatar rotation


It is better to use an exhaustive match here rather than assuming that the only other option is "worse quality". Then if more options are added to MediaQuality, we will get an error that some case is not handled.

r10s

comment created time in 2 days
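The exhaustiveness point can be sketched like this; MediaQuality here is a stand-in (only `Balanced` appears in the quoted diff, the second variant name is an assumption):

```rust
#[derive(Clone, Copy)]
enum MediaQuality {
    Balanced,
    Worse, // hypothetical second variant for illustration
}

// An exhaustive match: if a new MediaQuality variant is added later,
// this function stops compiling until the new case is handled --
// unlike an `if quality == Balanced { ... } else { ... }`, which would
// silently lump the new variant in with "worse quality".
fn avatar_size(quality: MediaQuality) -> u32 {
    match quality {
        MediaQuality::Balanced => 256,
        MediaQuality::Worse => 128,
    }
}

fn main() {
    assert_eq!(avatar_size(MediaQuality::Balanced), 256);
    assert_eq!(avatar_size(MediaQuality::Worse), 128);
}
```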

Pull request review comment deltachat/deltachat-core-rust

Re-enable Export to the new backup format, add backup progress, add a test for the backup progress


This assert only checks that the returned value is not 0 or None. Maybe check that it actually returns 1 (and not 2 or anything like that)?

Hocuri

comment created time in 2 days

Pull request review comment deltachat/deltachat-core-rust

Re-enable Export to the new backup format, add backup progress, add a test for the backup progress


I don't know what the difference is exactly, but the display method seems better suited for this use case.

Hocuri

comment created time in 2 days

started biscuitehh/pam-watchid

started time in 2 days

started akirakyle/emacs-webkit

started time in 3 days

started talentlessguy/tinyhttp

started time in 3 days

pull request comment deltachat/deltachat-core-rust

[WIP] Improve sql interface

Note that job_try! will fail the job. Maybe we need a job_try_warn! or something like this in some cases, to write a warning into the log and return job::Status::RetryNow or job::Status::RetryLater.

dignifiedquire

comment created time in 4 days
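The job_try!/job_try_warn! idea above can be sketched with a simplified Status type; the variants here are stand-ins for illustration, not the real job::Status:

```rust
#[derive(Debug, PartialEq)]
enum Status {
    Finished(Result<(), String>),
    RetryLater,
}

// job_try!: on error, fail the job immediately.
macro_rules! job_try {
    ($expr:expr) => {
        match $expr {
            Ok(val) => val,
            Err(err) => return Status::Finished(Err(err.to_string())),
        }
    };
}

// job_try_warn! (hypothetical): on error, log a warning and retry later
// instead of failing the job outright.
macro_rules! job_try_warn {
    ($expr:expr) => {
        match $expr {
            Ok(val) => val,
            Err(err) => {
                eprintln!("warning: {}", err);
                return Status::RetryLater;
            }
        }
    };
}

fn run_failing_job() -> Status {
    let _n: u32 = job_try!("not a number".parse::<u32>());
    Status::Finished(Ok(()))
}

fn run_retrying_job() -> Status {
    let _n: u32 = job_try_warn!("not a number".parse::<u32>());
    Status::Finished(Ok(()))
}

fn main() {
    assert!(matches!(run_failing_job(), Status::Finished(Err(_))));
    assert_eq!(run_retrying_job(), Status::RetryLater);
}
```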

issue comment deltachat/deltachat-core-rust

Trailing new lines aren't removed from incoming classic email

IIRC from a recent chat in the testing group, the original mail is an HTML mail. If so, this issue is about cleaning empty lines that may be produced by the html-to-text conversion.

adbenitez

comment created time in 4 days
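A minimal sketch of the cleanup being suggested, assuming a simple post-processing pass over the converted text (this is not the actual deltachat code):

```rust
/// Drop trailing empty or whitespace-only lines, such as those an
/// html-to-text conversion may leave behind, keeping one final newline.
fn remove_trailing_empty_lines(text: &str) -> String {
    let trimmed = text.trim_end_matches(|c: char| c == '\n' || c == '\r' || c == ' ' || c == '\t');
    format!("{}\n", trimmed)
}

fn main() {
    // a converted mail body with leftover blank lines at the end
    let converted = "Hello world\n\n \n\n";
    assert_eq!(remove_trailing_empty_lines(converted), "Hello world\n");
    // interior blank lines are untouched
    assert_eq!(remove_trailing_empty_lines("a\n\nb\n\n"), "a\n\nb\n");
}
```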
