Data Management & Integrations
Share your questions and best practices on the Gainsight Analyzer, Adoption Explorer, GDM, API, Person model, or anything related to integrations.
- 346 Posts
- 958 Replies
Hi Everyone! I am curious if anyone has completed an integration with Qualtrics to pull in survey data, detractor cases, and other data? If so, I would also be interested to know whether you made a direct connection or used third-party middleware. Our use case is one where a different team manages Qualtrics, and we simply want to start by bringing the data into Gainsight so we can add automated health score metrics. Once that is done we will look at further use cases. Any information would be greatly appreciated!
Hello - Wondering if anyone has come across this and has a potential workaround. We have a few S3 exports that we have set up to provide to our data team. We noticed that whenever we add a new field to the file export, it typically shows up in the middle of the file. This requires our data team to build a new file to deal with the change. The expected behavior would be that any field I add shows up at the end of the data file. Is there something I’m not doing that is causing this? It seems that maybe some fields have a manually created label, and these new ones don’t?
In the Snowflake schema, CLOSE_DATE, CREATED_DATE, and MODIFIED_DATE_LAST are all DATE format. When I query in Snowflake, I receive DATE-formatted data, i.e. 2021-01-28. When I add this object in the Snowflake Connector, those 3 fields show a data type of “date-time”. I have tried to ingest that data into a destination MDA object with the fields formatted as DATE, and no values come over for these fields; with the fields formatted as DATE-TIME on the MDA object, no values come over either. It appears I cannot change the field type on the object when I add it to the Snowflake Connector. Has anyone seen anything similar and/or have any guidance?
To create a dependent field in Gainsight, here's a step-by-step guide on how to achieve this:
1. Access the Gainsight Administration area and navigate to the object that contains the fields you want to set the dependency on (in this case, the object containing the "Lifecycle" and "Churn Reason" fields).
2. Locate the "Lifecycle" field and click on it to edit its properties.
3. Set up the field dependency by selecting the "Churn Reason" field as the dependent field and the "Lifecycle" field as the controlling field.
4. Specify the conditions for the dependency. In this case, you want the "Churn Reason" field to be required only when the "Lifecycle" field is set to "Contract End".
5. Save the field dependency settings.
6. Navigate to the "Churn Reason" field and set it as a required field in its properties. This ensures that when the condition specified in the field dependency is met, the "Churn Reason" field becomes mandatory.
7. Save the changes to the "Churn Reason" field.
With these steps in place, "Churn Reason" will be mandatory whenever "Lifecycle" is set to "Contract End".
Hello Team, I am trying to get the list of Task details and the associated Playbook details. I can do this in the SFDC version by fetching the info from JBCXM_PlaybookTasks__c. But in the NXT version I don't see an option to fetch the information I am looking for. Scenario: We are using Gainsight NXT in our company, and we are trying to merge a business that uses the Gainsight SFDC version into our Gainsight NXT instance. So, basically, we need to associate all the assets from the Gainsight SFDC version with the Gainsight NXT version. Now, when I try to fetch the list of Tasks and their associated playbooks in Gainsight NXT, I don't see any MDA object which holds the Task data.
Read API: fetching a multi-picklist field returns a single GSID, which is not useful. If the field has more than one value, the Read API still generates one single GSID. It would be great to have the actual values of a multi-picklist when fetching the data through the API. Use case: when fetching data through the Read API, users should be able to understand what they have fetched.
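Until the Read API returns labels directly, one workaround is to resolve the GSIDs client-side against a picklist lookup. A minimal sketch, assuming you can export a `{gsid: label}` map once from the picklist admin area and that multiple values come back as a delimited string of GSIDs (both the delimiter and the map are assumptions, not confirmed API behavior):

```python
# Hypothetical helper: resolve multi-picklist GSIDs to human-readable labels.
# gsid_to_label is a map you maintain yourself, e.g. exported from admin.

def resolve_multipicklist(raw_value, gsid_to_label, sep=";"):
    """Split a multi-picklist value into GSIDs and map each to its label.
    Unknown GSIDs are kept as-is so nothing is silently dropped."""
    if not raw_value:
        return []
    gsids = [v.strip() for v in raw_value.split(sep) if v.strip()]
    return [gsid_to_label.get(g, g) for g in gsids]
```

Keeping unknown GSIDs in the output (rather than dropping them) makes stale lookup maps easy to spot.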
I'm having an issue with updating values in the "Fields to Set" section of Google Tag Manager for event parameters in GA4. I'm using GTM to send data to GA4, and I have a configuration tag in GTM where I'm using the "Fields to Set" section to assign global values to event parameters. These values are pushed to the dataLayer and updated, at which point an event triggers the config tag to reload the values. I can confirm that the dataLayer is being updated correctly. When the values are updated in the dataLayer, I trigger the configuration tag to fire, which I can see in GTM debug mode, so this part seems to be working just fine. However, even though GTM debug mode shows the correct updated values in the fired tag, all of my events still show the initial values for the event parameters. I've tried to troubleshoot the issue but haven't found a solution yet; there isn't much information online about "Fields to Set". I'm wondering why the updated values in the "Fields to Set" section aren't reflected in my GA4 events.
Hi Team, I am making REST API calls to Gainsight objects and fetching the data. In order to make the API calls we need to generate the access token and pass it in the header. To generate the access token we are using grant_type='refresh_token' and passing a new refresh token each time, but somehow we keep getting an 'Invalid refresh token' error. I wanted to check: is there any other way, apart from the refresh token, to generate the access token and get authorized? Let me know if any more details are needed. Thanks in advance!! Priyanka
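For what it's worth, a common cause of 'Invalid refresh token' in OAuth-style flows is replaying a refresh token that has already been consumed: many servers rotate the token on each exchange, so you must persist and reuse the refresh token returned in the *response*. A minimal sketch of building the exchange request, where the endpoint path and parameter names are illustrative assumptions (confirm them against your tenant's OAuth/Connect documentation):

```python
# Sketch of a grant_type=refresh_token exchange. The URL path and field
# names are assumptions for illustration, not a confirmed Gainsight API.
import urllib.parse

def build_token_request(base_url, client_id, client_secret, refresh_token):
    """Return the URL and form-encoded body for a refresh-token exchange.
    Always persist the refresh_token that comes back in the response and
    reuse THAT one next time -- replaying an already-consumed token is a
    common cause of 'Invalid refresh token' errors."""
    url = f"{base_url.rstrip('/')}/v1/users/oauth/token"  # illustrative path
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    })
    return url, body
```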
Hello - We are experiencing issues when viewing the Error Log from Connector jobs. The file contains header labels shown as system internal names rather than the logical field labels we explicitly define in Data Management: instead of field labels, it contains a code with the prefix 'gsd'. This creates extra steps for the end user when attempting any kind of troubleshooting, as the internal names are not helpful if a sync failed. The user has to navigate away to Data Management, look up the matching internal name ID, and then find the corresponding field label - a poor user experience. Any idea why the headers don't reflect the field names?
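As a stopgap while the logs stay in this format, the header row can be translated programmatically. A minimal sketch, assuming you export an `{internal_name: label}` map once from Data Management (the map itself is something you'd maintain, not something the log provides):

```python
# Hypothetical post-processing step: translate 'gsd...'-style internal
# names in an error-log header row back to Data Management labels.

def relabel_headers(header_row, internal_to_label):
    """Replace internal names with human-readable labels, leaving any
    unmapped headers untouched so nothing is lost."""
    return [internal_to_label.get(h, h) for h in header_row]
```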
We have data that gets updated daily, and sometimes certain usage data is upserted. Is there a way to use two different filter fields when upserting data in Adoption Explorer, versus the global filter? Example: the count of a particular telemetry event in history can change based on modified date, and hence needs to be pulled into Gainsight based on this field. However, the usage metric needs to use a different field (say, created date) for the time-based usage count. Has anyone encountered or solved similar criteria? AE doesn’t seem to honor two different fields.
Our Customer Advocacy teams would like to implement a points tracking system where customers can earn points by providing a reference or case study, and also redeem points for custom training or swag. Has anyone implemented anything similar to this using Gainsight? I am thinking about creating a custom object for this, where a record is created for each new points addition/subtraction, then using reporting to get a sum of current points for each customer. I am just concerned that this might be too manual, and I'm trying to figure out what sort of automation can be added. Would appreciate any thoughts or advice. Thanks!
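The ledger idea described above boils down to signed transactions summed per company. A minimal sketch of the balance calculation (field names are illustrative, not actual Gainsight object fields):

```python
# Ledger model: each record is a signed points transaction; the current
# balance is a per-company sum. Redemptions are simply negative entries.
from collections import defaultdict

def points_balances(ledger):
    """Sum signed point transactions into a per-company balance."""
    totals = defaultdict(int)
    for entry in ledger:
        totals[entry["company_id"]] += entry["points"]
    return dict(totals)
```

One advantage of keeping the raw ledger (rather than storing a running total) is that balances can always be rebuilt from history if a transaction is corrected.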
Hi, now that the Person object is part of every C360 and R360, our CSMs add new users and assign them to the various accounts where relationships exist. So one user (email address) can be part of multiple C360s as well as R360s. We have several other objects where we store user data keyed by email address, e.g. product usage stats and webinars. Historically, we would always load the SFDC Account ID as part of every object – this field would come prepopulated via our data lake, and the mapping would have been done either before the data enters the data lake or within it, so it would already be available in the dataset when it reaches Gainsight. But now that we manage user mapping via the Person object, I’d like to switch the “email -> SFDC Account ID” mapping to reflect the one in Company Person/Relationship Person. In other words, if a contact is associated with 5 different C360s and they attend a webinar, I would like this information to appear in all 5 C360s (multiplied), without relying on the prepopulated SFDC Account ID.
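The "multiplied" behavior described above is a fan-out join: each event row is duplicated once per company its email is associated with. A minimal sketch under the assumption that Company Person can be exported as (email, company_id) pairs (field names are illustrative):

```python
# Fan-out join: one output row per (event, associated company).
# An attendee mapped to 5 companies yields 5 rows for one webinar event.

def fan_out(events, person_companies):
    """events: [{'email': ..., ...}]; person_companies: [(email, company_id)].
    Returns each event duplicated per associated company."""
    by_email = {}
    for email, company_id in person_companies:
        by_email.setdefault(email, []).append(company_id)
    out = []
    for event in events:
        for company_id in by_email.get(event["email"], []):
            out.append({**event, "company_id": company_id})
    return out
```

In practice this is the logic a Data Designer merge on email would perform; events whose email has no Person association are dropped, which is worth handling deliberately.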
We define a “role” for Company/Relationship Person, and the value for this role is managed by a picklist. I can generate a report from Company/Relationship Person and include this field. The problem I’m trying to solve: when I add these fields in “Show Me” - Company Name, Role1, Role2 - the result is a report with multiple lines per company (one for each Person with that role). What I want is one line per company with totals for each role. How could I accomplish this?
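What the report needs is a pivot: collapsing one-row-per-person into one-row-per-company with a count per role. A minimal sketch of that aggregation (column names are illustrative, not actual report fields):

```python
# Pivot person-level rows into one summary per company with role counts.
from collections import Counter, defaultdict

def roles_per_company(rows):
    """rows: [{'company': ..., 'role': ...}] -> {company: {role: count}}."""
    pivot = defaultdict(Counter)
    for row in rows:
        pivot[row["company"]][row["role"]] += 1
    return {company: dict(counts) for company, counts in pivot.items()}
```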
I was reading the documentation, and it says that the default 'App' pane size does not support the Latest/Recent Timeline Activities, Health Score with History, Customer Journey, and Reports widgets; however, with Zendesk Agent Workspace enabled in Zendesk, resizing the Apps pane is possible, and these widgets become available when the pane is expanded. I have enabled Zendesk Agent Workspace and upgraded to the Zendesk 2.0 connector in Gainsight, but now even when I increase the pane size, the health score chart remains blank.
I understand that refreshing a sandbox is a great way to know that the configuration you are testing against is the same as the configuration in Production. However, the issue is that a refresh wipes out sandbox changes. If we spun up the sandbox for our own development in the first place, why would we want to pull in the latest changes from Production? Can this practice be made optional?
During our initial implementation we created a usage table that's populated via an S3 ingest job. We recently added a phase 2 of metrics to include in the existing table, which are new columns that have been added to it. Future data files will map to the new columns, but we are looking to backfill historical data for them. I wanted to check whether there is a best-practice way to do this that keeps the existing metrics as-is and only updates the new columns with the historical data. My initial thinking is to create an S3 ingest rule that uses Date and SFDC ID as the unique identifiers, and only map the columns I want to upsert, leaving all other columns I want unchanged as "None." Would this be the best way to accomplish this?
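The intended behavior of that ingest rule can be illustrated with a toy model: an upsert keyed on (date, SFDC ID) that writes only the newly mapped columns and never touches the rest. This is a sketch of the semantics, not Gainsight's actual ingest implementation:

```python
# Toy model of a partial-column upsert: only fields present in an update
# row (beyond the keys) are overwritten -- the same effect as mapping
# unchanged columns to "None" in the ingest rule.

def backfill(table, updates, key_fields=("date", "sfdc_id")):
    """table/updates: lists of dicts sharing the key fields."""
    index = {tuple(row[k] for k in key_fields): row for row in table}
    for upd in updates:
        key = tuple(upd[k] for k in key_fields)
        if key in index:
            index[key].update(
                {k: v for k, v in upd.items() if k not in key_fields}
            )
    return table
```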
The recent update has made the look and functionality poorer than in the previous version. I noticed that the names also changed: "Schema" is now "Fields" and "Data" is now "Data Operations" (screenshots: https://share.getcloudapp.com/2NumknLD and https://share.getcloudapp.com/9ZuoGbZE). Also, it is no longer possible to sort on any fields (account name, email, etc.). Each field consumes too much space horizontally and vertically, so we can’t see as many records at any given time and it requires too much scrolling.
I am interested in migrating an existing Microsoft SQL Server system with a lot of stored procedures to the Hadoop ecosystem. Is there an existing tool, or a tool suggestion, that is good for migrating the stored procedures to the Hadoop ecosystem?
Currently, Data Designer offers merge only on the basis of the equals operator. I want to merge on the basis of a 'contains' operator. Since this functionality is not yet available, is there a workaround? For example, Field1 in Object1 has values V1, V2. I want to merge it with Object2 based on Field2 with value V1. How do I achieve this?
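One possible workaround is to perform the 'contains' join outside Data Designer and re-import the result. A minimal sketch, assuming Field1 holds a comma-separated list of values (the delimiter and field names are assumptions for illustration):

```python
# 'Contains' merge sketch: join rows where the left field's comma-separated
# value list contains the right field's value, e.g. 'V1,V2' contains 'V1'.

def contains_merge(left, right, left_field, right_field):
    """left/right: lists of dicts. Returns the combined matching rows."""
    out = []
    for l in left:
        values = {v.strip() for v in str(l[left_field]).split(",")}
        for r in right:
            if str(r[right_field]).strip() in values:
                out.append({**l, **r})
    return out
```

Splitting into a set of exact values (rather than a raw substring check) avoids false matches like 'V1' matching 'V10'.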