Read API data one by one or in blocks

Post by Marc Venken » Sun Dec 18, 2022 2:53 pm

Hello,

Reading API data from the server can be done one record at a time (for each record: connect, retrieve the data, and process it). This is what I do now.

Or should I consider reading 200 or more records at once and processing that block of data? The problem there is that the resulting hash will be large and much more difficult to process, because it is not obvious which part belongs to which record.
At least, that is what I think.
Should I mind that there are many connections if I process them one by one?
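
For example, with a block read I would be doing something like this (only a sketch, assuming the server returns a JSON array; Harbour's hb_jsonDecode() turns it into an array of hashes, one hash per record):

   FUNCTION Main()
      // cJson would come from the API call; a tiny inline sample here
      LOCAL cJson    := '[ { "id": 1, "name": "First" }, { "id": 2, "name": "Second" } ]'
      LOCAL aRecords := hb_jsonDecode( cJson )   // array of hashes, one per record
      LOCAL hRecord

      FOR EACH hRecord IN aRecords
         ? hRecord[ "id" ], hRecord[ "name" ]    // each element is still one record
      NEXT

   RETURN NIL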

How do you process the data from API readings?
Marc Venken
Using: FWH 23.04 with Harbour

Re: Read API data one by one or in blocks

Post by Antonio Linares » Mon Dec 19, 2022 9:27 am

Dear Marc,

Let's see what chatGPT has to say :-) https://chat.openai.com/chat

You can even ask it to show you some code examples :-)
It really depends on the specifics of your situation and what you are trying to achieve. Here are a few things to consider:

If you are reading data from an API one record at a time, you will likely have to make many connections to the server, which could be slower and potentially more resource-intensive than reading multiple records at once.
On the other hand, if you are reading multiple records at once, you will have to process a larger amount of data in one go, which could potentially be more complex and time-consuming.
One way to balance these trade-offs is to read a certain number of records at a time, rather than reading all of the records at once. This can allow you to process the data in smaller chunks and potentially improve the efficiency of your program.
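
As a concrete illustration, a paged read loop in Harbour might look like this (only a sketch: the limit/offset parameters and the ProcessBatch() routine are invented for the example, and it assumes FWH's WebPageContents() for the HTTP call):

   FUNCTION FetchInPages()
      LOCAL nPageSize := 200, nOffset := 0
      LOCAL cJson, aBatch

      DO WHILE .T.
         // "limit"/"offset" are hypothetical parameters; adapt to your API
         cJson  := WebPageContents( "https://api.example.com/items?limit=" + ;
                   hb_ntos( nPageSize ) + "&offset=" + hb_ntos( nOffset ) )
         aBatch := hb_jsonDecode( cJson )
         IF Empty( aBatch )
            EXIT                      // no more records
         ENDIF
         ProcessBatch( aBatch )       // your own routine: one hash per record
         nOffset += nPageSize
      ENDDO

   RETURN NIL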
In terms of how to process the data from API readings, it really depends on the structure of the data and what you want to do with it. Some common approaches include:

Parsing the data into a structured format, such as a list or a dictionary, to make it easier to access specific pieces of information.
Iterating over the data to perform some action on each record, such as storing it in a database or analyzing it in some way (see the sketch after this list).
Using libraries or frameworks that make it easier to work with the data, such as Pandas for data analysis or requests for making HTTP requests to APIs.
Ultimately, the best approach will depend on your specific needs and goals.
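
In Harbour terms, the first two approaches above might be combined like this (a sketch; the DBF and the key names are invented for illustration):

   FUNCTION StoreRecords( aRecords )
      LOCAL hRec

      USE customers NEW               // sample DBF with ID and NAME fields (invented)
      FOR EACH hRec IN aRecords
         dbAppend()
         FIELD->ID   := hRec[ "id" ]
         FIELD->NAME := hRec[ "name" ]
      NEXT
      dbCloseArea()

   RETURN NIL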

Every time you ask chatGPT you get a different answer that somehow complements what it already said:
There are a few things to consider when deciding how to retrieve data from an API:

How much data are you retrieving? If you are only retrieving a small amount of data, it may be more efficient to retrieve it one record at a time. However, if you are retrieving a large amount of data, it may be more efficient to retrieve it in larger batches.

How often do you need to retrieve the data? If you only need to retrieve the data once, it may not matter as much whether you retrieve it one record at a time or in larger batches. However, if you need to retrieve the data frequently, it may be more efficient to retrieve it in larger batches to minimize the number of connections you need to make.

What are the requirements of the API? Some APIs may have limits on the number of requests you can make per minute or per day, so you may need to consider this when deciding how to retrieve the data.

In general, it's a good idea to profile your code and measure the performance of different approaches to see which one is the most efficient for your specific use case.
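
A simple way to do that measurement in Harbour (a sketch; FetchOneByOne() and FetchInPages() stand in for your two strategies):

   FUNCTION TimeBoth()
      LOCAL nStart

      nStart := hb_MilliSeconds()
      FetchOneByOne()                 // your record-at-a-time version
      ? "one by one:", hb_MilliSeconds() - nStart, "ms"

      nStart := hb_MilliSeconds()
      FetchInPages()                  // your batched version
      ? "in pages:  ", hb_MilliSeconds() - nStart, "ms"

   RETURN NIL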


There are a few different factors to consider when deciding how to process data from an API:

The size of the data: If you are dealing with a large amount of data, it may be more efficient to retrieve it in larger chunks and process it in batches rather than making a separate API call for each record. This can help reduce the overall number of API calls and reduce the time required to retrieve and process the data.

The structure of the data: If the data is structured in a way that makes it easy to identify individual records within a larger chunk of data, then it may be more efficient to retrieve and process the data in larger chunks. On the other hand, if the data is not well-structured or if it is difficult to identify individual records, it may be more efficient to retrieve and process the data one record at a time.

The frequency of updates: If the data is being updated frequently, then it may be more efficient to retrieve and process the data one record at a time in order to get the most up-to-date information (see the single-record sketch after this list).

The number of API calls: If making a large number of API calls is a concern, then it may be more efficient to retrieve and process the data in larger chunks in order to reduce the overall number of API calls.

Ultimately, the best approach will depend on the specific requirements and characteristics of your application and the data you are working with. It may be helpful to try out different approaches and see which one works best in your particular case.
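
For the single-record case mentioned above, refreshing one record on demand might look like this (a sketch; the endpoint is hypothetical, again assuming WebPageContents() and hb_jsonDecode()):

   FUNCTION FetchOne( nId )
      LOCAL cJson := WebPageContents( "https://api.example.com/items/" + hb_ntos( nId ) )
      LOCAL hRec  := hb_jsonDecode( cJson )   // a single hash for this record

   RETURN hRec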
regards, saludos

Antonio Linares
www.fivetechsoft.com

