Post changes from VirtualTable to SQL Server

Discussion of open issues, suggestions and bugs regarding SDAC (SQL Server Data Access Components) for Delphi, C++Builder, Lazarus (and FPC)
Post Reply
johncmolyneux
Posts: 15
Joined: Fri 26 Jun 2009 10:06

Post changes from VirtualTable to SQL Server

Post by johncmolyneux » Fri 26 Jun 2009 10:32

Hi.

I want to perform a quite straightforward task, but can find no help on how to do it.

I'm using an MSQuery to get data from a table, and assigning the data to a VirtualTable (it needs to do this to mimic the complete functionality later). I'm then editing the VirtualTable via DB aware controls, and I want to post the changes back to the MSQuery (and therefore the DB).

How do I do this final part?

Thanks in advance,

John.

Ludek
Posts: 296
Joined: Thu 12 Oct 2006 09:34

Post by Ludek » Fri 26 Jun 2009 11:13

I would also appreciate a simple solution to this issue. I would use it to solve my old problem described in
http://devart.com/forums/viewtopic.php?t=13289

johncmolyneux
Posts: 15
Joined: Fri 26 Jun 2009 10:06

Post by johncmolyneux » Fri 26 Jun 2009 11:24

After reading the linked post from Ludek, I decided to give a bit more background on my current problem, so as to explain my requirements.

I have a server/client application setup, where the server has DB access, and the client does not.

The client can request data (with an SQL query) and the server creates a VirtualTable and sends it via TCP to the client.

The client then displays the data and allows the user to modify it.

If the data was modified, it is sent back to the server via TCP, and recreated as a VirtualTable there.

The final step is to take the VirtualTable and apply the updates to the original MSQuery so that it can post the changes to the database.

I was originally doing this using ADO and a KbmMemTable. I found SDAC a lot faster than ADO, but the VirtualTable only slightly faster than the KbmMemTable. At least with the KbmMemTable I could apply the dataset back to a dataset object and then post the changes. I cannot find any way to do this with the VirtualTable, hence this question.

Hope this makes sense, but please ask if you need clarification. I look forward to your reply.

Thanks,

John.

johncmolyneux
Posts: 15
Joined: Fri 26 Jun 2009 10:06

Post by johncmolyneux » Fri 26 Jun 2009 11:43

Okay, so this turned out to be so easy that I feel a little silly now (not really :twisted: )

I did the following...


MSQuery1.Assign(VirtualTable1);  // copy the edited data back into the query
MSQuery1.Edit;                   // put the dataset into edit mode
MSQuery1.Post;                   // post the changes through to the database


MSQuery1 was the original dataset that I retrieved from the database, but it could be a reconstructed version, made by simply calling the same query again.
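For anyone landing here later, the whole round trip might look roughly like the sketch below. This is only a sketch based on what worked in this thread: the component names (MSQuery1, VirtualTable1) are carried over from the post, it assumes the query selects an updatable result set, and exactly which rows Assign followed by Edit/Post writes back may depend on the dataset state, so verify against your own data before relying on it.

```delphi
// Hedged sketch: snapshot a query into an in-memory table, let the user
// edit it, then copy the edits back and post them to SQL Server.
procedure RoundTrip;
begin
  MSQuery1.Open;                   // fetch the original data
  VirtualTable1.Assign(MSQuery1);  // snapshot into the in-memory table

  // ... user edits VirtualTable1 through DB-aware controls
  //     (or, as above, it travels to a client over TCP and back) ...

  MSQuery1.Assign(VirtualTable1);  // copy the edited data back
  MSQuery1.Edit;
  MSQuery1.Post;                   // write the changes to the database
end;
```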

Ludek - hope this helps you out.

John.

Ludek
Posts: 296
Joined: Thu 12 Oct 2006 09:34

Post by Ludek » Fri 26 Jun 2009 12:26

Hmmm, now I see that our problems are different. I would just like a dataset class that does not necessarily read its data from the database, but can write the user's changes to the database.

Even the CachedUpdates property and OnUpdateRecord event in the TVirtualTable class would help me a lot (plus the ability to fill the TVirtualTable from a TMSQuery, but as I see now, this is probably already possible).
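For context, the cached-updates pattern described here already exists on TMSQuery itself; what is being asked for is the same behaviour on TVirtualTable. A hedged sketch of how it is typically used on the query side (MSQuery1 is an assumed component name):

```delphi
// Sketch of SDAC's cached-updates pattern on TMSQuery. Edits accumulate
// in memory and are written to the server in one batch.
MSQuery1.CachedUpdates := True;
MSQuery1.Open;

// ... user appends/edits rows; nothing hits the server yet ...

MSQuery1.ApplyUpdates;   // send all pending inserts/updates/deletes
MSQuery1.CommitUpdates;  // clear the pending-changes log
```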

johncmolyneux
Posts: 15
Joined: Fri 26 Jun 2009 10:06

Post by johncmolyneux » Fri 26 Jun 2009 13:20

I think what you're asking for is a definite no-go. If the table you wanted to eventually post to had ID fields (for example), how would they be handled in a generic manner?

It's quite a strange thing you're asking for.

The only suggestion I would have, other than "where 1 = 0" which obviously parses every row of data, would be to do a "select top 0 * from table". At least that gets an empty dataset which can then be posted later, without the overhead of getting a load of data which is ignored.
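The "top 0" suggestion can be sketched as follows (the table name Orders is a placeholder; whether the resulting dataset accepts inserts still depends on the query being updatable):

```delphi
// Fetch only the structure of the table, transferring no rows.
// 'Orders' is a placeholder table name for illustration.
MSQuery1.SQL.Text := 'select top 0 * from Orders';
MSQuery1.Open;    // opens an empty dataset with the right field defs

// rows appended and posted now go straight to the table
MSQuery1.Append;
// ... set field values ...
MSQuery1.Post;
```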

Ludek
Posts: 296
Joined: Thu 12 Oct 2006 09:34

Post by Ludek » Fri 26 Jun 2009 14:17

Why strange? Imagine one master, 20 details, a slow connection over VPN (one round trip ca. 100 ms), and a call to MasterDataset.Append. That means opening 20 queries = at least 2 seconds just to deliver no data. (Local master-detail is not possible, as the detail tables have millions of rows.)

I don't use SQL Server generated identity columns or any other special features, so no problem can appear.

johncmolyneux
Posts: 15
Joined: Fri 26 Jun 2009 10:06

Post by johncmolyneux » Mon 29 Jun 2009 07:06

I must be misunderstanding your problem, because I'm not seeing a reason why you would open 20 queries to the master table, in order to append 1 master row and 20 detail rows.

Could you not create an insert statement that handles all the inserts in one go?

Ludek
Posts: 296
Joined: Thu 12 Oct 2006 09:34

Post by Ludek » Mon 29 Jun 2009 12:37

In 95% of cases the user does Master.Append via a button, then fills the 20 detail datasets (i.e. grids) in cached updates mode, and then saves everything in one transaction.
In the other 5% he calls Master.Edit and then edits data everywhere (in the master, in the details, everywhere).
I want to save the 20 query openings in the 95% case and still keep the 5% editing case possible. (Imagine writing invoices: you nearly always create new invoices (= Master.Append) with positions (= Client.Append), serials (...), payment options etc., and only very occasionally want to edit older ones (= Master.Edit, Client.Edit, Client.Append, etc.).)

I don't need to send Query.Open to the detail datasets in the Master.Append case, as I have persistent fields and know that I won't get any rows.

Ludek
Posts: 296
Joined: Thu 12 Oct 2006 09:34

Post by Ludek » Mon 29 Jun 2009 12:42

I mean 20 detail datasets, not just 20 rows. Each dataset has different fields from different tables, shown in different grids (well, 20 is slightly overestimated, but 10 would be a realistic number).
And the 10 queries are not simple; there are sometimes many (10+) joins, some even on (unindexable) views, so even with "top 0" and on a LAN their execution is sometimes slow.

johncmolyneux
Posts: 15
Joined: Fri 26 Jun 2009 10:06

Post by johncmolyneux » Mon 29 Jun 2009 13:40

I think that realistically, with a design like that (and I use the word design generously!), you're gonna have to bite the bullet: either redesign it in a friendlier manner, or create the empty datasets client-side by adding the field defs at design time or in code at run time.

I don't think these components are missing anything that could help you here; I don't think anything short of a redesign will help you with this!

Ludek
Posts: 296
Joined: Thu 12 Oct 2006 09:34

Post by Ludek » Mon 29 Jun 2009 15:11

Well, if you would tell ALL our customers that next time they can't do all the things they are used to, I would be really happy, believe me :)

Sorry, but I CAN see a possible solution here on the SDAC side. And I know I am able to redesign everything, without TMSQuery at all, just improvising everything by hand. I just don't have time for it... :( And we are in 2009, not in 1990 :)

I can also tell you that copying data between TVirtualTable and TMSQuery is silly, that you have to redesign it and avoid it :)

Please lock this thread, it is all pointless...

Post Reply