Import a file from S3 to Bionic Rules directly
I've been hearing multiple use cases and suggestions around being able to import data files from S3 directly into Bionic Rules.
1) Eliminate the need to set up S3 jobs and then run the rule separately, potentially avoiding 24 hours of delay before data is processed and available for consumption.
2) Loading data to SFDC directly without needing to store it in MDA as a staging area.
3) No more worrying about missed schedules. With Bionic Rules, reading the data from S3 and processing it can be a single rule.
4) Avoid storing unwanted data: dimensionality reduction, filtering out bad-quality records, applying transformations.
5) Merging multiple files into one dataset before loading.
What are the other supporting use cases that you can think of?
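To make use cases 4 and 5 concrete, here is a minimal sketch (my own illustration, not Gainsight's implementation) of filtering out bad-quality records and merging two CSV files into one dataset, using only the Python standard library on in-memory data:

```python
import csv
import io

def clean_and_merge(csv_texts, required_fields):
    """Parse several CSV strings, drop rows missing any required field,
    and merge the survivors into a single list of dicts."""
    merged = []
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            # Filter bad-quality records: every required field must be non-empty.
            if all(row.get(f, "").strip() for f in required_fields):
                merged.append(row)
    return merged

file_a = "account_id,mrr\nA001,1200\nA002,\n"  # A002 has no MRR, so it is dropped
file_b = "account_id,mrr\nA003,800\n"

rows = clean_and_merge([file_a, file_b], ["account_id", "mrr"])
print([r["account_id"] for r in rows])  # -> ['A001', 'A003']
```

The field names here are made up; the point is just that filtering and merging in one rule removes the need for a separate staging/cleanup step.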
In particular, #1, #3, and #5 get my vote as the strongest use cases in the list. Basically, this sets up a way to allow data transformation, if nothing else, to occur immediately after new data arrives.
I'd want to talk through #2, "load directly to SFDC," in a very detailed fashion. There are a lot of implications there for the SFDC-side data that may have no bearing on Gainsight, which always makes me nervous. I'd basically want that to be a feature added later.
One other use case:
-- Allowing an opportunity to load custom field mappings (static data points like "load date" or "data source") as part of a file ingest as well - formula fields with Today(), for instance, for Load Date, etc.
(1-3) will address missed connector run scenarios and in turn reduce data issues. I believe it's one of the features many customers have been waiting for.
Can't wait for #1, #4, and #5 to happen. There would be tremendous productivity benefits from these ideas, and I believe some of them may already be in the pipeline. Overall, I like the way Gainsight takes client feedback seriously.
Keep up the good work!
With the Winter release, you will be able to import data directly from an S3 file and perform transformations on it.
There is a new type of task (S3) in Bionic Rules. This task reads an S3 file (the user gets the flexibility to define the format of the S3 file, similar to the existing Export to S3 functionality) and loads show fields based on the columns in the S3 file. Once the file is loaded, the show fields will be available for selection by the user.
Once a file is loaded, the show fields cannot be re-synced with the S3 file; that always requires a new task.
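The announcement doesn't spell out how the show fields are derived. Assuming the task simply takes the CSV header row as the field list (my reading of the description above, not confirmed behavior), the inference step might look like:

```python
import csv
import io

def infer_show_fields(header_line):
    """Derive selectable 'show fields' from a CSV header row.
    Assumes a plain comma-delimited header, similar to Export to S3 defaults."""
    reader = csv.reader(io.StringIO(header_line))
    return next(reader)

fields = infer_show_fields("Account Id,Account Name,MRR,Renewal Date")
print(fields)  # -> ['Account Id', 'Account Name', 'MRR', 'Renewal Date']
```

This also illustrates why the fields can't be re-synced in place: a file with different columns would produce a different field list, so a new task is needed.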
Meanwhile, S3 jobs in the Connectors tab are running successfully without issue.
"Error occured while loading the file. The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: F3F4D19E1C2050D0)"
I was able to do an S3 Connector import without a problem after this.
Can you provide a screenshot of the S3 path that you were trying to configure, and also a screenshot of the Gainsight-managed bucket where the file is placed? Agreed that the error messaging is poor, but what it means is that either the configured path is wrong or the file does not exist.
Regards,
Jitin
Try what I have in my dev environment, which works.
I placed the file in the same input directory as you, except with a different file name. In case the picture looks a little small, my path is:
MDA-Data-Ingest/input/Export from Bionic-2018-02-20.csv
PS I don't have an Export directory - is that necessary?
Your example in the S3 Dataset Task in Bionic Rules documentation doesn't show having to add all the folders in the path, just the filename (I took screenshots from the .gif on that doc):
I see that you have a support ticket open, so I shall look into that ticket. FYI, the doc has been flagged for an update for the same reason you came upon.
Best,
Kevin Ly
Thanks for the catch, and thanks for flagging the support doc to have it updated. I was originally going off that doc, which is why it was failing to begin with. An explicit statement in the verbiage saying "You must enter the full path" would be good.
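To spell out the "full path" lesson: the NoSuchKey 404 earlier in the thread happens because S3 keys are exact strings, so omitting the folder segments asks for a different object entirely. A tiny sketch (a hypothetical helper, not part of any Gainsight API) using the working path from this thread:

```python
def full_key(*parts):
    """Join folder segments and a file name into one complete S3 object key."""
    return "/".join(p.strip("/") for p in parts if p)

# Omitting the folders produces a different key, so S3 returns NoSuchKey (404).
wrong = full_key("Export from Bionic-2018-02-20.csv")
right = full_key("MDA-Data-Ingest", "input", "Export from Bionic-2018-02-20.csv")
print(wrong)  # -> Export from Bionic-2018-02-20.csv
print(right)  # -> MDA-Data-Ingest/input/Export from Bionic-2018-02-20.csv
```

In other words, the file name alone (as the doc's .gif suggested) is only a valid key if the object sits at the bucket root, which isn't the case here.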
Please refer to the release notes for more information.
Thanks for posting!