*** zbenjamin is now known as Guest45281 | 01:11 | |
*** zbenjamin_ is now known as zbenjamin | 01:11 | |
*** frinring_ is now known as frinring | 01:35 | |
dcaliste | Hello chriadam, I'm sorry, I'll be 15 minutes late this morning. I still have one or two things to do. Is that alright for you, or will it be too late? | 06:49 |
chriadam | hi dcaliste, no problem at all, no rush | 06:49 |
dcaliste | Ok, thanks, see you in 30 minutes then. | 06:49 |
*** svartoyg is now known as svartoyg_afk | 07:23 | |
*** Ischwitch is now known as Ingvix | 07:24 | |
dcaliste | Hello again chriadam, sorry for the delay today. | 07:30 |
chriadam | hi dcaliste, no problem at all | 07:31 |
chriadam | I hope you had a good day last Tuesday with your family | 07:31 |
dcaliste | Yes indeed, thank you. | 07:31 |
dcaliste | Besides the MRs that we already discussed last time, I've added this one https://git.sailfishos.org/mer-core/buteo-sync-plugin-caldav/merge_requests/71 for CalDAV. | 07:32 |
chriadam | let me check | 07:32 |
dcaliste | It's mainly code simplification by using the native kcalcore copy function. | 07:33 |
dcaliste | I've been using it daily for 10 days now without any particular issue. Which doesn't mean it's bug free! | 07:33 |
chriadam | I mean, I really like the look of all of that red/removed code! I will have to digest precisely what changes are involved | 07:34 |
dcaliste | Sure, I tried to track all the differences between what was copied in the removed CalDAV method and what is copied in each class in KCalCore. | 07:35 |
dcaliste | The only noticeable difference is that the kcalcore one also copies the UID. | 07:35 |
dcaliste | That's why there are some setUid() changes, to set the UID before the copy. | 07:36 |
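For reference, a minimal sketch of the up-cast copy being described, assuming KCalCore's documented IncidenceBase::operator= (which delegates to the virtual assign()); the helper name and surrounding code are hypothetical, the real change is in the MR linked above:

```cpp
#include <KCalCore/Incidence>

// Hypothetical helper: assigning through an IncidenceBase reference invokes
// the virtual assign(), so all members of the concrete subclass (Event,
// Todo, ...) are copied, including the UID; hence the setUid() fix-up
// before the copy.
void copyIncidenceFromServer(const KCalCore::Incidence::Ptr &local,
                             const KCalCore::Incidence::Ptr &server)
{
    server->setUid(local->uid());   // keep the database UID, not the upstream one
    KCalCore::IncidenceBase &dest = *local;
    dest = *server;                 // polymorphic copy via assign()
}
```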
dcaliste | That being said, there are so many members to copy, distributed over all the classes involved, that I may have missed some… | 07:36 |
chriadam | is it that the kcalcore code changed sometime since we first wrote it, so that what we did is no longer required, or did we just unintentionally duplicate kcalcore code originally? | 07:37 |
dcaliste | I think the copy function has existed in KCalCore for a long time. It's a bit tricky to use because of the polymorphism, which requires up-casting to IncidenceBase, but it's documented. | 07:38 |
dcaliste | I don't know if you intentionally implemented a copy yourselves to avoid something in the kcalcore one, or overlooked it by accident. | 07:39 |
dcaliste | I think it may be due to the UID handling, because the upstream UID and the database UID are not the same. | 07:41 |
chriadam | I wonder if it also filters out some X- properties | 07:41 |
dcaliste | That's why, in the patch, I overwrite the upstream UID with the database one before doing the copy. | 07:42 |
chriadam | as we used to use some of those for certain non-persistent data IIRC | 07:42 |
dcaliste | As for the properties, I think they are all copied. In incidencebase.cpp#152, it calls the operator= of the custom properties. | 07:43 |
dcaliste | And this operator= does a copy of the private part. | 07:43 |
dcaliste | See customproperties.cpp#101 | 07:43 |
chriadam | I mean, I wonder if our custom implementation did something different in some cases | 07:43 |
dcaliste | Ah, ok. I missed the point ;) | 07:44 |
chriadam | to avoid copying some special X- properties, or comments, if they were not meant to be persisted or something | 07:44 |
chriadam | not sure, just wondering why we did it | 07:44 |
chriadam | are there other places where we have similar incidence handler / copier? do we have something like that in google calendar, or in one of the tools in the carddav repo, I wonder? | 07:45 |
dcaliste | There is one other place in the icalconverter in nemo-qml-plugin-calendar. | 07:45 |
chriadam | ah true. what is that one used for? sharing event as .ics or something? | 07:46 |
dcaliste | To convert to or import from ics data. It is mainly duplicated code from the caldav plugin. | 07:47 |
chriadam | true | 07:47 |
chriadam | might be nice to get rid of that at the same time, but can also happen later too. | 07:48 |
dcaliste | That's also partly why I'm trying to simplify this process as much as possible: to rely as much as possible on kcalcore, and not to patch deficiencies of mkcal or kcalcore here and there. | 07:48 |
chriadam | thanks for looking into this one - the less custom code we have the better | 07:48 |
chriadam | yeah, definitely the right approach. | 07:48 |
dcaliste | That being said, I discovered this commit upstream: https://invent.kde.org/frameworks/kcalendarcore/-/commit/52ce13387bfd15e25333738c080fc844eff14c4b | 07:50 |
dcaliste | And I created an MR for our kcalcore: https://git.sailfishos.org/mer-core/kcalcore/merge_requests/19 | 07:50 |
dcaliste | It adds properties that are for local use only and are not serialised to ICS data by kcalcore. | 07:50 |
chriadam | ah, I saw an email about that one | 07:50 |
dcaliste | That way we could save a lot of boilerplate code in caldav and friends that stores etags in comments and removes them before upstreaming… | 07:51 |
chriadam | indeed! | 07:51 |
chriadam | that seems like a very good idea | 07:51 |
dcaliste | The patch is simple and applies cleanly. I just have to check with mkCal, because the properties are stored as X-KDE- properties and mkCal saves them with setNonKDECustomProperties(). | 07:52 |
dcaliste | That's why I added a little commit on top to store them properly even when they come in via setNonKDECustomProperties(). | 07:53 |
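As a rough illustration of the feature (a sketch only; the "VOLATILE" application id and the skip-on-serialisation behaviour reflect one reading of the upstream commit, not verified API):

```cpp
#include <KCalCore/Event>
#include <KCalCore/ICalFormat>

// Sketch, assuming the upstream change skips X-KDE-VOLATILE-* custom
// properties during ICS serialisation.
void demonstrateVolatileProperty()
{
    KCalCore::Event::Ptr event(new KCalCore::Event);
    event->setCustomProperty("VOLATILE", "ETAG", QStringLiteral("\"abc123\""));

    KCalCore::ICalFormat format;
    const QString ics = format.toICalString(event);
    // 'ics' should contain no X-KDE-VOLATILE-ETAG line, so nothing has to
    // be stripped before upstreaming, while the property still persists in
    // the local database.
}
```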
chriadam | fyi: pvuorela is on vacation for the next 3 weeks, so it's just me and flypig doing reviews currently (and as branching has just occurred, we're in the middle of reactively providing fixes for RC issues etc, so I'm not sure how much reviewing we'll get done in the next week or so, unfortunately) | 07:53 |
dcaliste | No problem, there's no hurry anyway, since the window for the next version after 3.4.0 will be open for some time. | 07:54 |
chriadam | yep. still, poke me if we let things go unreviewed for too long! | 07:55 |
dcaliste | There is one thing about the volatile properties that I would like to discuss with you a bit more. It's more about code design. | 07:55 |
chriadam | definitely | 07:55 |
dcaliste | I would like to add support for error reporting in the sync process. | 07:55 |
dcaliste | After thinking about it for a (long) while, and with the arrival of these volatile properties, I've come to the conclusion that they can be used as the vector to notify the UI layer. | 07:56 |
dcaliste | The idea is for sync plugins to flag events with sync trouble using a predefined volatile property. | 07:57 |
dcaliste | Then the QML binding could expose this property in a proper way (meaning a QML property with an enum) and the UI could display the errors. | 07:58 |
chriadam | my initial reaction is "I don't think this is the right way" but maybe I'm misunderstanding | 07:58 |
dcaliste | You can see it in action with screenshots in jolla-calendar!266 | 07:58 |
dcaliste | That's why I would like to discuss it. | 07:59 |
chriadam | let me check | 07:59 |
dcaliste | The idea of notifying per event is to warn the user that the information displayed for this event may not match the server version. | 08:00 |
chriadam | ok, I think I'm understanding a bit more, now | 08:01 |
chriadam | 1) this isn't related to Buteo at all (i.e. not a way to report to sync ui or similar), it's sync-middleware-agnostic really | 08:02 |
chriadam | 2) it's just a way for the plugin to specify, on a per-event basis, whether a sync issue occurred | 08:02 |
chriadam | 3) then the UI can display appropriate message for the user when they are viewing events | 08:02 |
chriadam | that makes more sense. however, does that mean that this "sync failed" flag/property needs to be persistent (but not exported for sync)? | 08:03 |
dcaliste | Exactly. And it doesn't add too much boilerplate code, thanks to the volatile properties. There's no need to remove them on export, and they are saved in the local database so they persist on the device… | 08:03 |
dcaliste | I mean, all of this comes without additional code, except setting the custom property in the right place and removing it when the sync succeeds later on. | 08:04 |
dcaliste | You can see how I'm setting these properties in https://git.sailfishos.org/dcaliste/buteo-sync-plugin-caldav/commit/ad1e6fca34d0095643baedc16d94626bcc357450 | 08:05 |
dcaliste | The flag setting would need to be implemented in google sync (I can do it if the design is accepted) and in EAS, of course… | 08:07 |
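In spirit, the flagging could look like this (the property key and values here are illustrative, not necessarily the ones the linked commit uses):

```cpp
#include <KCalCore/Incidence>

// Illustrative only: mark an incidence whose sync failed, using a volatile
// property so the flag persists locally but is never exported to the server.
static void flagSyncFailure(const KCalCore::Incidence::Ptr &incidence,
                            const QString &reason) // e.g. "upload-failure"
{
    incidence->setCustomProperty("VOLATILE", "SYNC-FAILURE", reason);
}

// ...and clear the flag again once a later sync succeeds for that incidence.
static void clearSyncFailure(const KCalCore::Incidence::Ptr &incidence)
{
    incidence->removeCustomProperty("VOLATILE", "SYNC-FAILURE");
}
```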
chriadam | my brain isn't working, I'm trying to parse that commit | 08:09 |
chriadam | is this setting the flag on every incidence in the notebook? | 08:09 |
dcaliste | No, just the failing ones. | 08:10 |
chriadam | if a sync failure is encountered, don't we roll back any changes to incidences in the notebook? | 08:10 |
dcaliste | The success state is stored without the property. | 08:10 |
chriadam | so there are two cases: 1) incidence may differ from server side, due to unrelated sync failure. 2) incidence differs from server side, and caused sync failure | 08:11 |
chriadam | or am I misunderstanding still | 08:11 |
dcaliste | Ah, yes, this is something I didn't discuss, because I wanted to separate the "tag events" part from the "handle failures in caldav" part. But indeed they are a bit related. | 08:11 |
dcaliste | At the moment this is the status for caldav: | 08:11 |
dcaliste | - local modifications are pushed to the server first, one by one (I mean all at once, but with separate requests); | 08:12 |
dcaliste | - then remote modifications are applied locally. | 08:12 |
dcaliste | Suppose an error appears in the second part: there is no rollback for the local modifications, they have been sent anyway. | 08:13 |
chriadam | right | 08:13 |
dcaliste | Worse, suppose one of the upstreamed modifications fails: all the previous ones are kept, but all the later ones are not applied. | 08:13 |
chriadam | indeed :-/ | 08:14 |
dcaliste | Indeed, if any of the remote changes fails, none of the remote changes are propagated to the device. | 08:14 |
dcaliste | This creates user frustration, because one event fails and then all the other server modifications are not applied, even the valid ones. The net result is that the device database ends up in a worse situation with respect to upstream than if the valid changes had been applied. | 08:16 |
dcaliste | I'm working on a branch at the moment (and have been testing it for one week, with success), where any error is stored but the sync continues for everything valid that is unrelated to the error. | 08:17 |
chriadam | makes sense | 08:17 |
chriadam | just to make sure I understand: | 08:17 |
dcaliste | The idea is that etags are given per URL, so we can apply everything that is valid in both directions and just leave the URLs with failures aside. | 08:18 |
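Schematically (all types and helpers here are hypothetical, just to make the per-URL control flow concrete):

```cpp
#include <QList>
#include <QStringList>

// Hypothetical types/helpers: since etags are tracked per resource URL,
// each href can succeed or fail independently instead of aborting the
// whole sync on the first error.
struct RemoteResource { QString href; QString etag; QString icsData; };

void applyRemoteChanges(const QList<RemoteResource> &remoteChanges)
{
    QStringList failingHrefs;
    for (const RemoteResource &resource : remoteChanges) {
        if (!applyRemoteChange(resource)) {      // hypothetical helper
            failingHrefs.append(resource.href);  // leave this URL aside...
            continue;                            // ...but keep syncing the rest
        }
        storeEtag(resource.href, resource.etag); // hypothetical helper
    }
    // failingHrefs can then be reported, or retried on the next sync.
}
```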
chriadam | right, I agree in principle - that sounds great. and for the "upsync caused issue, flag that local event as possible cause, but then continue" case, that seems simple | 08:18 |
chriadam | but for the "now apply remote modifications" case where one of the events fails to save or something -> how can that one be handled? | 08:19 |
chriadam | do we then have a transaction per event? | 08:19 |
chriadam | instead of per-notebook? | 08:19 |
chriadam | (that seems... suboptimal to me..) | 08:19 |
dcaliste | Indeed, it's per notebook. | 08:19 |
dcaliste | Good point, at the moment, I'm not flagging when the transaction fails… | 08:20 |
dcaliste | I'm flagging when the in-memory operation fails, I mean when working in mCalendar. | 08:20 |
dcaliste | I forgot about the mStorage->save() failure case… | 08:21 |
chriadam | it might not be necessary, I mean, even handling the upsync case is a big improvement | 08:21 |
dcaliste | I'm flagging the dissociation failures, or the addition failures (for whatever reason, but mainly invalid UID). | 08:21 |
chriadam | in those failed cases, what do you do? store a list of "broken" events in memory, roll back the transaction, start a new transaction, write the "broken" property to those events and commit the transaction? | 08:22 |
dcaliste | And I'm flagging delete failures too, like when the event is not in the calendar because of a programmatic error, or whatever. | 08:22 |
dcaliste | Indeed, in those cases I'm flagging the offending events locally, so the user can be warned that the data is not synced with upstream and that they should check the server for the valid appointment time. | 08:23 |
dcaliste | Then nothing is done for these events in the database, I mean no modifications. | 08:24 |
dcaliste | They are kept as they were before the failing action. | 08:24 |
dcaliste | But the rest of the events are queued for modification, and the modifications are applied nonetheless (if mStorage->save() succeeds). | 08:25 |
chriadam | sounds good, I just wonder about the granularity. i.e. if the "failure" results in a transaction rollback, then won't it roll back every change to every event modified on the server in that notebook since the last sync? | 08:25 |
chriadam | ah, the rest of the events / changes are queued for write afterwards | 08:25 |
chriadam | I see... so a second transaction, containing: successes from previous transaction, plus flags for the failure ones. | 08:26 |
chriadam | makes sense. | 08:26 |
dcaliste | Exactly. | 08:26 |
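A pseudo-C++ sketch of the flow just agreed (hypothetical names throughout; mCalendar/mStorage follow the mkcal naming used in the conversation):

```cpp
#include <KCalCore/Incidence>

// Hypothetical sketch: apply each remote change in memory (mCalendar),
// flag failures instead of aborting, then persist everything via a single
// mStorage->save() transaction.
struct RemoteChange { QString uid; QString icsData; };     // hypothetical type

void NotebookSyncAgent::applyRemoteChanges(const QList<RemoteChange> &remoteChanges)
{
    for (const RemoteChange &change : remoteChanges) {
        if (!applyToCalendar(mCalendar, change)) {         // hypothetical helper
            KCalCore::Incidence::Ptr local = mCalendar->incidence(change.uid);
            if (local) {
                local->setCustomProperty("VOLATILE", "SYNC-FAILURE",
                                         QStringLiteral("download-failure"));
            }
            continue;  // the local copy stays as before the failing action
        }
    }
    if (!mStorage->save()) {
        // still unhandled, as noted above: if the transaction itself
        // fails, the failure flags are not persisted either
    }
}
```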
dcaliste | This is my branch with this: https://git.sailfishos.org/dcaliste/buteo-sync-plugin-caldav/tree/failures | 08:27 |
dcaliste | There are many commits, because it was a huge rework of notebooksyncagent logic. | 08:27 |
dcaliste | I tried to create as small and consistent commits as I could. | 08:27 |
chriadam | sounds like a great improvement | 08:28 |
dcaliste | The tricky part was tracking down the old assumption that everything stops on failure, which made it safe to leave things in a broken state. | 08:28 |
dcaliste | If you agree, it will be a huge review. But as I said before, there is no hurry. | 08:29 |
chriadam | and now that I am understanding a bit more, I think I agree with your proposed solution. there are of course edge cases (e.g. what if transient failure occurs during the attempt to commit the second transaction). but in general, definitely agree that we can't let one failure stop sync altogether, and worse, silently. | 08:29 |
dcaliste | The main benefit is that when you have several notebooks for an account, a failing one won't block the others. | 08:30 |
chriadam | yep, I can't promise to review that one in the next week or two, would like to discuss with pvuorela about it when he gets back too. | 08:30 |
dcaliste | Sure, of course, I don't think I've even created the MR yet ;) | 08:31 |
chriadam | sounds like a big change. thank you. | 08:31 |
dcaliste | With this done, it's then natural, in my opinion, to display the remaining errors of a sync to the user. | 08:32 |
dcaliste | The problem with flagging the local events, is that we're missing two error cases: | 08:32 |
dcaliste | - we cannot warn the user about a local deletion that was not propagated to the server (not a big deal); | 08:33 |
dcaliste | - we cannot warn the user about a remote addition that fails (a big issue if you're missing a new appointment added by your boss on the server). | 08:33 |
chriadam | could add a custom property to the notebook | 08:34 |
chriadam | and display some banner in the calendar app. but would need to discuss that UI side with joona and martin | 08:35 |
dcaliste | Of course. It's just brainstorming about the new possibilities offered by error handling at the sync level. | 08:36 |
chriadam | indeed | 08:36 |
chriadam | one concern I have is: | 08:36 |
dcaliste | Because there is another technical possibility that I didn't discuss yet, if you still have 10 minutes or something. | 08:36 |
chriadam | yes, no rush | 08:36 |
chriadam | go ahead | 08:36 |
dcaliste | You first, I didn't notice your sentence. | 08:37 |
dcaliste | I was still typing, looking at the keyboard… | 08:37 |
chriadam | my concern is that by pushing the communication channel into the datum itself (i.e. a custom property / flag of the incidence), we force this to be handled by the most data-type-aware endpoints only (i.e. the sync plugin and the calendar app). | 08:38 |
chriadam | thus, it doesn't "automatically scale" to other data types (like contacts) | 08:38 |
chriadam | or if we wanted to add a Sync UI in future, as relates to your buteo PRs etc | 08:38 |
chriadam | but, perhaps that's not a big problem: after all, contacts (and other data) probably change relatively infrequently, and have less impact on day-to-day life than e.g. appointments / events / etc | 08:39 |
dcaliste | Exactly, same thinking here. Here is my other technical point about this: | 08:39 |
chriadam | so maybe "high level" error reporting for contacts (e.g. "something went wrong") is fine, whereas per-event reporting makes sense for calendar | 08:39 |
dcaliste | - having it at the event level makes it easy (read: very little code to add) to display it in the UI where it matters. And it's not very CPU intensive either, since the information is already provided per event. | 08:41 |
dcaliste | - I also thought about broader error reporting done at the Buteo level, so every sync process can report errors. Here are the technical details: | 08:42 |
chriadam | indeed | 08:42 |
dcaliste | the XML log currently looks like | 08:42 |
dcaliste | <sync at="" majorCode=""> | 08:42 |
dcaliste | <target name="ploum" added="" …/> | 08:42 |
dcaliste | </sync> | 08:42 |
dcaliste | What about extending the target element with: | 08:43 |
dcaliste | <target name="ploum" added=""> | 08:43 |
dcaliste | <item uid="123456" errorCode="XX"/> | 08:43 |
dcaliste | </target> | 08:43 |
dcaliste | That way a sync plugin can store this failure information per event / contact / whatever. | 08:44 |
chriadam | makes sense. requires a stringifiable uid, but that is probably doable in all cases we care about | 08:44 |
dcaliste | We could display the failing events in a sync UI… | 08:44 |
dcaliste | Then the calendar UI, or the QML layer, could fetch the latest state from Buteo and adjust the QML property accordingly for the event at hand. | 08:45 |
dcaliste | But it's more code to be added. | 08:46 |
dcaliste | We'd need to add the complete handling of this in Buteo, and add Buteo code inside nemo-qml-plugin-calendar to expose it to the UI. | 08:46 |
dcaliste | Or let the UI talk to Buteo itself, which would require even more code. | 08:47 |
chriadam | yeah. I think that reporting per item error codes in the buteo results struct makes sense going forward. but I worry how much effort it might take (and how many things would need to be changed) if the major use-case we can identify is just calendar events. | 08:47 |
dcaliste | Not to mention the CPU demand, if we have to do this check for every event. | 08:47 |
chriadam | indeed | 08:47 |
dcaliste | That's why I started with the volatile properties when I discovered them ;) | 08:48 |
chriadam | makes sense | 08:48 |
dcaliste | And also why I wanted to discuss these code design possibilities with you. | 08:48 |
chriadam | certainly, from the calendar ui POV, it makes the most sense to expose the error info as an event property | 08:48 |
chriadam | from the (currently non-existent) sync UI, it makes sense to expose the error info as Buteo Results struct etc | 08:49 |
chriadam | but that is a future thing IMO. | 08:49 |
chriadam | ok, well, for now, you've convinced me that this is a sensible and achievable solution to the issue. thanks very much for the explanations and your work to implement it. | 08:50 |
chriadam | of course, pvuorela and jpetrell will want to discuss it also, when they get back from their respective vacations | 08:50 |
chriadam | was there anything else to discuss this week? I saw you updated the upstream qmf change - thanks very much for that. sorry that I didn't raise those concerns with you on the original PR (I don't know qmf too well, unfortunately) | 08:52 |
dcaliste | Sure, don't hesitate to discuss it with them if you can find time later on, or to schedule a meeting all together, so we can discuss the technical and UI implications of the various solutions. | 08:52 |
chriadam | yes, we should schedule such a meeting when they're both back, I think | 08:52 |
chriadam | ... so it wasn't until I was discussing with Matt that he mentioned that the queryAccounts() or whatever method existed | 08:52 |
dcaliste | About the QMF MR, no problem, it seems I didn't know the code much myself either ;) | 08:53 |
dcaliste | That being said, I still prefer my solution, but I see the problem of not doing it the same way for other requests like folders… | 08:53 |
chriadam | like any Qt module, it's too big to truly know, unless you work with it on a daily basis, IMO. | 08:54 |
chriadam | yeah, it's a valid approach. depends on the "contract" offered by the API I guess - i.e. the interpretation of the meaning of the method | 08:54 |
dcaliste | The chosen solution adds an SQL query in the client code, but well, SQLite is fast anyway, and we're speaking of a few queries, not something done intensively. So it's fine. | 08:55 |
chriadam | great | 08:55 |
dcaliste | Thank you for discussing these error reporting matters at length today. | 08:56 |
chriadam | no problem, thank you for your explanations and effort as always | 08:56 |
chriadam | if nothing else to discuss, I hope that you have a great week! | 08:56 |
dcaliste | I wish you a great week too. Good night. | 08:57 |
chriadam | gnight! | 08:57 |
attah | lrelease error: Cannot open <file that exists>: No such file or directory | 09:51 |
attah | any ideas? | 09:51 |
x2s | path and everything correct? | 09:51 |
attah | path is correct from root of project | 09:51 |
attah | but maybe it should be correct at some later stage with some temporary dir? | 09:52 |
ViGe | attah: I just replied to you at the forum | 09:52 |
attah | ViGe: thanks! | 09:52 |
attah | that did it :) | 09:53 |
ViGe | attah: basically the problem in the .pro file is that TS_FI_FILE is relative to the current directory (i.e. the build directory) instead of the project root | 09:54 |
attah | so i need to convince it to make the translations folder available in the build dir? | 09:58 |
ViGe | attah: No, you need to add $$PWD/ in the beginning of TS_FI_FILE | 10:03 |
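For reference, the fix amounts to one line in the .pro file (the exact variable value below is illustrative):

```qmake
# TS_FI_FILE was resolved relative to the shadow-build directory; $$PWD
# anchors it to the directory containing the .pro file, so lrelease finds it.
TS_FI_FILE = $$PWD/translations/harbour-myapp-fi.ts
```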
attah | thanks, that worked | 10:05 |
Sellerie | Did anyone ever port docker to sailfishos? | 11:36 |
Sellerie | I have a horrible plan and I need logical separation from the rest of the system I think | 11:36 |
Nico[m] | I don't know of anyone porting docker, but you could probably make your own lightweight containers on what is available with unshare, if you just want to try stuff out | 11:38 |
Smar | does sailfish have a new enough systemd for nspawn? (if not, just regular systemd services may be more than enough) | 11:43 |
Sellerie | My idea was to make mpd work on SFOS somehow, but that probably misses a few things. Running it in docker and just connecting to it via mpd over network should do the trick | 11:43 |
Sellerie | Well, an MPD package or something similar would prolly work just as well | 11:45 |
Nico[m] | I'd just make a package for mpd | 11:46 |
Sellerie | What was the primary CLI package manager for SFOS again? | 11:47 |
Nico[m] | For advanced stuff, zypper | 11:47 |
Sellerie | I remember there being two, so zypper and another one | 11:47 |
ViGe | Sellerie: You are probably thinking of pkcon. | 11:49 |
Sellerie | Yes | 11:49 |
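For reference, typical on-device usage of both tools (assuming an mpd package were available in some repository, which is not a given):

```sh
devel-su                # become root on Sailfish OS
pkcon refresh           # PackageKit front-end: refresh repositories
pkcon install mpd       # install a package
zypper search mpd       # zypper, the more advanced alternative
```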
Aard | ~ | 11:52 |
Sellerie | Aard: ~: Permission denied | 11:56 |
Nico[m] | devel-su ~ | 11:57 |
Sellerie | Don't worry, IRC automatically censors your password | 11:57 |
Nico[m] | hunter2 | 11:57 |
ViGe | ******* | 11:58 |
Sellerie | Probably the second oldest trick in the IRC book | 11:58 |
*** svartoyg is now known as svartoyg_afk | 23:24 |