Data Management & Integrations
Share your questions and best practices on the Gainsight Analyzer, Adoption Explorer, GDM, API, Person model, or anything related to integrations.
- 293 Posts
- 829 Replies
Currently I have two machines. One of them is the Hortonworks sandbox, which I have configured as the NameNode (I decommissioned the DataNode role from it). The other machine I set up as a DataNode and installed Hive Server on it; I also assigned it the slave role, and I used Ambari to finish the setup. My question: since this is my first time ever using Hadoop, my plan is to transfer data from a SQL database into Hadoop. Does this mean I have to install MySQL on the DataNode if I will be using Sqoop? And secondly, what will the NameNode do: shall I query it so it passes the queries on to the DataNode? I'm really confused and under a lot of pressure to finish, so forgive me as I'm a newbie. The installations on both machines are all defaults; I have chosen DataNode for the first machine and NodeManager for the second, with no special configuration. I'd appreciate a simple example I can learn from. Thanks a lot, fellows.
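On the MySQL point: Sqoop connects to the source database over JDBC, so MySQL only needs to be network-reachable from the cluster; it does not have to be installed on the DataNode itself. A minimal sketch of assembling a basic `sqoop import` invocation (the host, database, table, user, and target path below are hypothetical examples, not values from this setup):

```python
# Minimal sketch: assembling a `sqoop import` command for MySQL -> HDFS.
# All names (dbhost.example.com, sales, orders, etl_user) are hypothetical.

def build_sqoop_import(jdbc_host: str, database: str, table: str,
                       user: str, target_dir: str) -> list:
    """Build the argument list for a basic Sqoop import over JDBC.

    MySQL only needs to be reachable over the network from the cluster;
    it does not need to live on the DataNode.
    """
    return [
        "sqoop", "import",
        "--connect", "jdbc:mysql://{}/{}".format(jdbc_host, database),
        "--username", user,
        "-P",                      # prompt for the DB password interactively
        "--table", table,
        "--target-dir", target_dir,
        "--num-mappers", "1",      # a single mapper is fine for a first test
    ]

cmd = build_sqoop_import("dbhost.example.com", "sales", "orders",
                         "etl_user", "/user/hive/warehouse/orders")
# On the cluster you would execute it with: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

The NameNode itself only tracks HDFS metadata (which blocks live on which DataNodes); the data written by the import lands on the DataNodes, and queries you later run through Hive are planned against that metadata.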
We want to trigger a process in SFDC where CSMs add a record to a low-volume object, and a ticket for a different team is then generated from the MDA record in SFDC. Is it possible to have SFDC trigger off of this directly from GS? We do not want to recreate the object in SFDC and take on the admin overhead of maintaining multiple custom objects and rules.
Hello - We are experiencing issues when viewing the Error Log from Connector jobs. The file contains header labels shown as system internal names rather than the logical field labels we explicitly define in Data Management. Instead of field labels, it contains a code with the prefix 'gsd'. This creates extra steps for the end user when troubleshooting, since the internal names are not helpful if a sync failed; there is no way to make sense of the log directly. The user has to navigate away to Data Management, look up the internal name ID, and then find the corresponding field label, which is a poor user experience. Any idea why the headers don't reflect the field names? Having to keep cross-referencing Data Management to match names with IDs is not a good experience.
My team has never used the "Active" checkbox in SFDC because they are worried it won't be accurate in GS. Has anyone run into accuracy issues when relying on the Active checkbox in SFDC? We want to pull data on only active accounts, but my SFDC team just let me know they have never used that box. Thoughts?
When creating a Data Designer dataset and using "Count of" on an Email field, there is an issue in the filter: it still shows a user lookup, even though the field is now a number field. This occurs when using the "equals" operator; it works in Rules Engine but not in Data Designer. The customer wanted to filter for "equals zero", so as a workaround we used "less than 1". Is this expected behavior?
All the data management reports within Gainsight NXT need to have the ability to be "sticky". I am referring to the User Management and Company-->Data Operations pages at least (they seem to use the same reporting/code structure). In both pages, the search is not sticky, so any time you refresh the page or go into a record to edit it, the page resets completely and you lose all your filters, search results, and your place within the results. This makes it very difficult to manage records when you don't want to create a rule just to change the values for a few records.
I understand that refreshing any sandbox is a great way to ensure that the configuration you are testing matches the configuration in Production. However, the issue is that a refresh wipes out sandbox changes. If we are spinning up the sandbox for our own development in the first place, why would we want to pull the latest changes from Production? Can this practice be made optional?
Hey guys, Gainsight is a popular platform, and we are all aware of its services. Gainsight provides tools to help customer success managers (CSMs) manage customer health, track customer engagement, and measure customer lifetime value (CLV), along with a suite of products that enable its customers to increase customer retention and expansion. In addition, Gainsight offers a set of APIs and SDKs that allow developers to build custom applications on top of the Gainsight platform. The company is continuing to innovate and add new features to its products; it recently announced a new customer success metric called the Net Promoter Score. This score will help customer success managers mea
Are there plans to provide additional out-of-the-box integrations, or third-party development of integrations? Most other SaaS products I use have app stores with tens to hundreds of integrations (Salesforce AppExchange, ServiceNow, Splunk, even more nascent services like ClickUp, Pendo, etc.). With just over a dozen currently, I feel there is a big gap in native integrations, and I'm curious whether improving this area is on the radar at all.
I am interested in migrating an existing Microsoft SQL Server system with a lot of stored procedures to the Hadoop ecosystem. Is there an existing tool, or a tool suggestion, that is good for migrating the stored procedures to the Hadoop ecosystem?
Hi everyone! I am curious whether anyone has completed an integration with Qualtrics to pull in survey data, detractor cases, and other data. If so, I would also be interested to know whether you made a direct connection or used third-party middleware. Our use case is one where a different team manages Qualtrics, and we simply want to start by bringing the data into Gainsight so we can add automated health score metrics. Once that is done we will look at further use cases. Any information would be very appreciated!
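For a direct connection, Qualtrics exposes a v3 REST API whose response-export flow is: start an export, poll its progress, then download the result file. A minimal sketch of building the request that starts the export, assuming an API token and data-center hostname (the data center, survey ID, and token below are hypothetical examples):

```python
# Sketch of the first step of the Qualtrics v3 response-export flow.
# Data center "yul1", survey ID, and token are hypothetical examples.
import json
import urllib.request

def export_request(datacenter, survey_id, api_token):
    """Build the POST request that starts a CSV export of survey responses."""
    url = "https://{}.qualtrics.com/API/v3/surveys/{}/export-responses".format(
        datacenter, survey_id)
    return urllib.request.Request(
        url,
        data=json.dumps({"format": "csv"}).encode(),
        headers={"X-API-TOKEN": api_token, "Content-Type": "application/json"},
        method="POST",
    )

req = export_request("yul1", "SV_hypothetical123", "my-token")
print(req.get_method(), req.full_url)
# After this call succeeds, poll .../export-responses/{progressId} until the
# export completes, then download .../export-responses/{fileId}/file. The
# resulting CSV can be staged (e.g. to S3) for Gainsight to ingest.
```

Middleware is still worth considering if another team owns Qualtrics, since it avoids sharing API tokens across teams.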
Within Adoption Explorer, how can I add a filter or otherwise limit the number of rows the AE job reads? My data team recently increased an already sizable table by 4x, bringing it close to 8 million rows daily. I know I will never need 7 million of those rows, because they include data Gainsight doesn't need, but I haven't found a way to filter the number of rows the AE job reads. With a "read" filter, or some pre-ingest filter, I could identify up front which rows I want AE to process further. By dropping rows at the initial read, I would ingest a more targeted dataset, improving ingest speed and reducing the processing load on the AE tool.
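Absent a native read filter, one workaround is to pre-filter the daily extract before it reaches the storage location Adoption Explorer reads from. A hedged sketch of that idea (the column name `product_line` and the kept values are made-up examples, not fields from this table):

```python
# Hypothetical workaround: drop unneeded rows from the daily extract before
# it lands where Adoption Explorer picks it up. Column names are examples.
import csv
import io

KEEP = {"analytics", "platform"}  # hypothetical product lines Gainsight needs

def prefilter(raw_csv, column, keep):
    """Return a CSV containing only rows whose `column` value is in `keep`."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row[column] in keep:
            writer.writerow(row)
    return out.getvalue()

sample = "account_id,product_line,events\nA1,analytics,40\nA2,legacy,9000\n"
filtered = prefilter(sample, "product_line", KEEP)
print(filtered)
```

The same idea can live upstream as a filtered view or a WHERE clause in the extract job, which keeps the 7 million unwanted rows from ever being shipped.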
I am trying to use Spring Vault to provide a centralized service for storing and retrieving credentials across our microservice ecosystem. However, our organization currently uses the CyberArk vault to centralize credentials, so what I am looking to build is an abstraction service based on Spring Vault that uses CyberArk as the storage engine in place of HashiCorp Vault.