Import a file from S3 into Bionic Rules directly

Related products: None

I've been hearing multiple use cases and suggestions around being able to import data files from S3 directly into Bionic Rules.

1) Eliminate the need to set up S3 jobs and then run the rule separately, potentially avoiding a 24-hour delay before data is processed and available for consumption.

2) Load data to SFDC directly without needing to store it in MDA as a staging area.

3) Stop worrying about missed schedules: with Bionic Rules, reading the data from S3 and processing it can be a single rule.

4) Avoid storing unwanted data: reduce dimensionality, filter out bad-quality records, and apply transformations before load.

5) Merge multiple files into one data set before loading (see the sketch at the end of this post).

What other supporting use cases can you think of?
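
To make #5 concrete, here's a minimal sketch of the kind of merge I have in mind, assuming boto3 and pandas; the bucket and file names are hypothetical:

```python
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
BUCKET = "my-company-exports"  # hypothetical bucket name

def read_csv_from_s3(key: str) -> pd.DataFrame:
    """Download one CSV object from S3 into a DataFrame."""
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return pd.read_csv(io.BytesIO(body))

# Use case #5: merge several daily files into one data set before loading.
keys = [
    "input/usage-2018-02-18.csv",
    "input/usage-2018-02-19.csv",
    "input/usage-2018-02-20.csv",
]
merged = pd.concat([read_csv_from_s3(k) for k in keys], ignore_index=True)
```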
Love the general idea! It turns Bionic Rules into a one-stop shop for data ingest and transformation.

In particular, #1, #3, and #5 get my vote as the strongest use cases on the list. Basically, this sets up a way for data transformation, if nothing else, to occur immediately after new data arrives.

I'd want to talk through #2, 'load directly to SFDC', in a very detailed fashion. There are a lot of implications there, with consequences for SFDC-side data that may have no bearing on Gainsight, which always makes me nervous. Basically, I would want that to be a feature added later.

One other use case:

-- Allowing custom field mappings (static data points like "load date" or "data source") to be loaded as part of a file ingest as well - formula fields with Today() for Load Date, for instance.
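
To illustrate that last one, here's a minimal pandas sketch of stamping static mapping fields onto records at ingest time; the file and column names are hypothetical:

```python
from datetime import date

import pandas as pd

df = pd.read_csv("usage-2018-02-20.csv")  # hypothetical ingest file

# Static data points applied to every record as part of the ingest --
# the moral equivalent of a Today() formula field for Load Date.
df["Load Date"] = date.today().isoformat()
df["Data Source"] = "S3 usage export"
```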
Thanks Ashok for bringing this up! Couldn't agree more; I would love to hear more perspectives.
Thanks Ashok for the idea!

(1-3) will address missed Connect flight scenarios and in turn reduce data issues. I believe it's one of the most-awaited features for many customers.
Another use case: for standalone customers, to load the Customers list I currently have to load to an MDA object (staging), load to Customer, and then load to Company via Gainsight Connect. With S3 as a source, I could load directly to Customer Info.
Thanks Ashok for the ideas!

Can't wait for #1, #4, and #5 to happen. There would be tremendous productivity benefits with these ideas. I believe some of them may already be in the pipeline. Overall, I like the way Gainsight (you) takes client feedback seriously.

Keep up the good work!
Thanks, everyone, for your input.

With the Winter release, you will be able to import data directly from an S3 file and apply transformations to it.

There is a new task type (S3) in Bionic Rules. This task reads an S3 file (the user gets the flexibility to define the format of the file, similar to the export-to-S3 functionality we have today) and loads show fields based on the columns in the S3 file. Once the file is loaded, the show fields become available for the user to select.

Once a file is loaded, the show fields cannot be synced with the S3 file again; it always has to be a new task.
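
For the curious, here's a rough conceptual sketch of the column-detection step - not the actual implementation - using boto3 and Python's csv module; the bucket and key names are hypothetical:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

def load_show_fields(bucket: str, key: str) -> list[str]:
    """Read the header row of the S3 file and derive one show field
    per column; these are what the rule then offers for selection."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    header = next(csv.reader(io.StringIO(body.decode("utf-8"))))
    return [name.strip() for name in header]

fields = load_show_fields("gainsight-managed",  # hypothetical bucket
                          "MDA-Data-Ingest/input/usage.csv")
```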
@Ashok - would this solution allow us to connect to a client's own S3 bucket? If so, that's a use case one of my clients would love to see. They are concerned about data at rest and outside their custody.
@Tanya - The current release does not support reading from a client's own S3 bucket, but this is planned on the near-term roadmap.
I tried this today, selecting the "GainsightManaged" bucket and clicking either Load Column Details or Preview, but I keep getting the error: "Error occured while loading the file. The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 6C0DCF3B759BA46B)"

Meanwhile, S3 jobs in the Connectors tab are running successfully without issue.
Tried it again in a different (demo) org... same error:

"Error occured while loading the file. The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: F3F4D19E1C2050D0)"

I was able to do an S3 Connector import without a problem after this.
Hi Jeff,

Can you provide a screenshot of the S3 path that you were trying to configure, and also a screenshot of the Gainsight-managed bucket where the file is placed? Agreed, the messaging is bad, but what it means is that either the configured path is wrong or the file does not exist.

Regards,

Jitin
Hey Jeff,

Try what I have in my dev org, which works.

[screenshot]

I placed the file in the same input directory as you, except with a different file name. In case the picture looks a little small, my path is:

MDA-Data-Ingest/input/Export from Bionic-2018-02-20.csv
Tried that too. It gives me a slightly different error though.

[screenshot]

PS I don't have an Export directory - is that necessary?

Your example in the S3 Dataset Task in Bionic Rules documentation doesn't show having to add all the folders in the path... just the filename (I took screenshots from the .gif on that doc):

[screenshots]
Thanks for the update, Jeff.

I see that you have a support ticket open, so I shall look into that ticket. FYI, the doc has been flagged for update for the same reason you came upon.

Best,

Kevin Ly
Well, dang it - I was moving so fast trying to juggle multiple things that I didn't realize I had failed to select the Gainsight bucket in the last attempt, where I added the MDA-Data-Ingest/input path.

Thanks for the catch. And thanks for flagging the support doc to have it updated. I was originally going off that doc, which is why it was failing to begin with. An explicit statement in the verbiage to say "You must enter the full path" would be good.
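
For future readers, here's the distinction in code terms - a hedged boto3 sketch with a hypothetical bucket name. An S3 key is the full path; a bare file name is simply a different key that does not exist, hence the NoSuchKey error:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "gainsight-managed"  # hypothetical bucket name

def key_exists(key: str) -> bool:
    """head_object succeeds only for the exact key, i.e. the full path."""
    try:
        s3.head_object(Bucket=BUCKET, Key=key)
        return True
    except ClientError:
        return False

key_exists("Export from Bionic-2018-02-20.csv")  # bare file name -> False (NoSuchKey)
key_exists("MDA-Data-Ingest/input/Export from Bionic-2018-02-20.csv")  # full path -> True, if the object exists
```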
I noticed that this no longer supports the MM/DD/YYYY format. For instance, my dates are stored as 03/13/2018, and it gives me an error unless I trim off the leading zeros. Is this a known issue? This is a standard date format used by most US customers.
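
For reference, a small Python sketch of the format in question; zero-padded MM/DD/YYYY parses cleanly with a standard format string:

```python
from datetime import datetime

# MM/DD/YYYY with leading zeros - the standard US format in question.
parsed = datetime.strptime("03/13/2018", "%m/%d/%Y").date()
print(parsed)  # 2018-03-13
```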
Hi all, enhanced support for Date and DateTime formats: previously, only a few Date and DateTime formats were supported in an S3 Dataset task. Support has been extended to include many more Date and DateTime formats.

Please refer to the release notes for more information.

Thanks for posting!