Tuesday 30 January 2018

Introduction to ReactJS

Overview of ReactJS:

React.js is a JavaScript library developed by Facebook for building composable UIs. It empowers the creation of reusable UI components which present data that changes over time. Many people use React as the V in MVC. React abstracts the DOM away from you, offering a simpler programming model and better performance. React can also render on the server using Node, and it can power native apps using React Native. React implements one-way reactive data flow, which reduces boilerplate and is easier to reason about than traditional data binding.

Why do people choose to program with React?

Fast - Apps made in React can handle complex updates and still feel quick and responsive.
Modular - React allows you to write many smaller, reusable files instead of large, dense files of code. React's modularity is a beautiful solution to JavaScript's maintainability issues.
Scalable - React performs best in large programs that display a lot of changing data.
Flexible - React approaches building user interfaces differently, by breaking them into components. This is incredibly important in large applications.
Popular - ReactJS gives better performance than many other JavaScript libraries due to its implementation of a virtual DOM.

React Features:

1. JSX − JSX is the preferred choice of many web developers. It isn't necessary to use JSX in React development, but there is a big difference between writing React.js components in JSX and in plain JavaScript (see the sketch after this list).
2. Unidirectional data flow and Flux − React.js is designed in such a manner that it will only support data that is flowing downstream, in one direction. If the data has to flow in another direction, you will need additional features.
3. Components − Components are the heart and soul of React. Components (like JavaScript functions) let you split the UI into independent, reusable pieces, and think about each piece in isolation.
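
To make the JSX and component ideas concrete, here is a minimal sketch in TypeScript (TSX). The component names, the props, and the "root" element id are made up for illustration, and the sketch assumes a React 18 project with a JSX-aware build step.

import React from "react";
import { createRoot } from "react-dom/client";

// A small, reusable component. Data flows one way: from parent to child via props.
type GreetingProps = { name: string };

function Greeting({ name }: GreetingProps) {
  // With JSX:
  return <h1>Hello, {name}!</h1>;
  // The equivalent without JSX would be:
  // return React.createElement("h1", null, `Hello, ${name}!`);
}

function App() {
  return (
    <div>
      <Greeting name="Ada" />
      <Greeting name="Grace" />
    </div>
  );
}

// Mount the app into a page element with id "root" (assumed to exist in the HTML).
createRoot(document.getElementById("root")!).render(<App />);

Each Greeting instance is independent and reusable, which is what lets you think about each piece of the UI in isolation.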


If you want more visit Mindmajix 

 
Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Monday 29 January 2018

Cognos Components and Services

There are different components in Cognos that communicate with each other using the BI Bus, which is based on the Simple Object Access Protocol (SOAP) and supports WSDL. The BI Bus in the Cognos architecture is not a software component; rather, it consists of a set of protocols that allow communication between Cognos services.

The processes enabled by the BI Bus protocol include:

  •  Messaging and dispatching
  •  Log message processing
  •  Database connection management 
  •  Microsoft .NET Framework interactions
  •  Port usage 
  •  Request flow processing
  •  Portal Pages 


When you install Cognos 8 using the Installation wizard, you specify where to install each of these components:

Gateways 

The Cognos 8 Web server tier contains one or more Cognos 8 gateways. The web communication in Cognos 8 is typically through gateways, which reside on one or more web servers. A gateway is an extension of a web server program that transfers information from the web server to another server. Web communication can also occur directly with a Cognos 8 dispatcher but this option is less common.

Cognos 8 supports several types of Web gateways, including –

CGI: The default gateway, CGI can be used with all supported web servers. However, for improved performance or throughput, you may choose one of the other supported gateway types.

ISAPI:  This can be used for the Microsoft Internet Information Services (IIS) Web server. It delivers faster performance for IIS.

Application Tier Components 

This tier consists of a dispatcher that is responsible for operating services and routing requests. The dispatcher is a multithreaded application that uses one or more threads per request. Configuration changes are routinely communicated to all the running dispatchers. The dispatcher also includes Cognos Application Firewall to provide security for Cognos 8.


The dispatcher can route requests to a local service, such as the report service, presentation service, job service, or monitor service. A dispatcher can also route requests to a specific dispatcher to run a given request. These requests can be routed to specific dispatchers based on load-balancing needs, or on package or user group requirements.

Author

Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.


Friday 26 January 2018

Introduction to IBM Cognos BI

The IBM Cognos 8 BI suite is one of the most widely used business intelligence packages. The software is fairly complete yet manageable, and IBM Cognos is one of the BI market leaders.
The main applications are used from a web-based portal that controls the Business Intelligence Server, which is the core of the tool.
This portal is called Cognos Connection. From it, delivered over the web, you can access environment administration options, the different applications that Cognos provides, the folder structure that organizes the reports, dashboards, and other elements that can be included in the portal.
Each application is intended to cover a type of requirement that frequently arises in such environments. Most are handled entirely from the web browser, both to create and design reports, events, and metrics, and to carry out querying or analysis tasks.
These are the main tools provided in the suite:
IBM Cognos Query Studio
IBM Cognos Report Studio
IBM Cognos Analysis Studio
IBM Cognos Event Studio
IBM Cognos Metric Studio
IBM Cognos Powerplay Transformer
IBM Cognos Framework Manager
IBM Cognos Planning
IBM Cognos TM1

IBM Cognos Report Studio

Report Studio is the main application for creating reports. It resembles Query Studio, but it is considerably more complete.
On the left it shows an object browser from which you can access the data structure and the different objects inserted into the reports. To the right is the report design area, where you can drag these objects to build the structure.
These objects can be of various kinds: data sources, report specifics, and design tools. Each object inserted in the report has configurable properties, and through these you can achieve a high level of customization, both in the data shown and in the layout of the design.
You can work with both relational data structures and dimensional structures; you just need to remember that, depending on the type of source, there are differences in the properties applicable to the data, and even in behavior in the design area. Although not required to do so, to display dimensionally structured data it is more appropriate to use crosstab-type reports. You can choose from several types of basic structure for reporting.
There are different kinds of charts, and even maps, that can be included in the reports, or displayed and stored independently to form part of a scorecard shown on the portal.
The options for using parameters and prompts are likewise fairly complete, although how they are defined is not very intuitive and is somewhat cumbersome.
As in all tools of this kind, you can define filters, sorting, and grouping, and work with totals, subtotals, calculated fields, and conditional formatting. You can also enable drill up and drill down, and use drill through.
Queries against relational sources are resolved in SQL, and those against dimensional models in MDX. The resulting query can be viewed and even edited and modified directly.

IBM Cognos Metric Studio

It is the tool used for building metrics and scorecards.
With Metric Studio you define the KPIs, or Key Performance Indicators, of the business. They are organized and related to each other, associated with different profiles, and monitored, allowing continuous comparison against performance targets and the definition of automated actions, for example notifications in case of deviations.
With these metrics, dashboards are built that enable the operational level to monitor performance against targets, and the strategic level to map corporate strategy and facilitate its communication to all levels of the organization.
The metrics can be built from various data sources, for example OLAP cubes, relational databases, spreadsheets, text files, and even values entered manually, and the tool provides wizards to facilitate the construction of metrics and dashboards.
Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Wednesday 24 January 2018

Apache Kafka

3 industries that depend on Apache Kafka

Apache Kafka is a distributed publish-subscribe messaging system designed to be fast, scalable, and durable. It provides a unified, high-throughput, low-latency platform for handling real-time data feeds and has a storage layer that is essentially a massively scalable pub/sub message queue architected as a distributed transaction log. That design makes Kafka, which was originally developed by LinkedIn and made open source in early 2011, highly valuable for enterprise systems that need to process streaming data.
Initially, Kafka was built for website activity tracking: capturing all the clicks, actions, or inputs on a site and enabling multiple "consumers" to subscribe to real-time updates of that data. Now, however, organizations in internet services, financial services, entertainment, and other industries have adopted Kafka's massively scalable architecture and applied it to valuable business data.
Kafka gives enterprises from these verticals a way to take everything happening in their organization and turn it into real-time data streams that multiple business units can subscribe to and analyze. For these organizations, Kafka acts as a replacement for traditional data stores that were siloed to single business units, and as a simple way to unify data from all their different systems.
Kafka has moved beyond IT operational data and is now also used for data related to consumer transactions, financial markets, and customers. Here are three ways different industries are using Kafka.

Internet services

A leading internet service provider (ISP) is using Kafka for service activation. When new customers sign up for internet access by phone or online, the hardware they receive must be validated before it can be used. The validation process creates a series of messages; a log collector then gathers that log data and delivers it to Kafka, which sends the data into various applications to be processed.
The advantage of using Kafka this way is that the IT platform can perform an action for a customer (activating service) and deliver data to an analytics application, so that the ISP can analyze activations by geographic region, rates of activation, and much more.
Before Kafka, capturing and routing data to different divisions required engineering, business intelligence, and separate pipelines duplicating the data. Kafka now serves as the single source of truth that captures not only data on what is happening with the application but also on what is happening with customers.
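
As a rough sketch of what publishing to and subscribing from Kafka looks like in application code, here is a minimal example using the kafkajs Node.js client. The broker address, topic name, consumer group, and message fields are made up for illustration.

import { Kafka } from "kafkajs";

// Hypothetical broker and topic names, for illustration only.
const kafka = new Kafka({ clientId: "activity-tracker", brokers: ["localhost:9092"] });

async function produceClick() {
  const producer = kafka.producer();
  await producer.connect();
  // Publish one activation/click event; any number of consumer groups can subscribe to it.
  await producer.send({
    topic: "site-activity",
    messages: [{ key: "user-42", value: JSON.stringify({ page: "/signup", action: "activate" }) }],
  });
  await producer.disconnect();
}

async function consumeClicks() {
  const consumer = kafka.consumer({ groupId: "analytics" });
  await consumer.connect();
  await consumer.subscribe({ topic: "site-activity", fromBeginning: true });
  // Each consumer group gets its own copy of the stream.
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log(message.key?.toString(), message.value?.toString());
    },
  });
}

produceClick().then(consumeClicks).catch(console.error);

Any number of consumer groups (analytics, billing, monitoring) can subscribe to the same topic independently, which is what lets a single activation event feed several downstream applications.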

Financial services

Global financial services firms need to analyze billions of daily transactions to look for market trends and stay on top of rapid, continuous changes in financial markets. One firm used to do that by gathering data from different business units after the market closed, sending it to a huge data lake, and then running analytics on the captured data.
To move from a reactive approach to real-time analysis of incoming market data, Kafka is serving as the messaging broker to house operations data and other market-related financial data. Now, instead of analyzing operational data after the fact, the company's analysts can keep their finger on how markets are doing in real time and make decisions accordingly.
An example of a financial firm using Kafka is Goldman Sachs, which led development of Symphony, an industry initiative to build a cloud-based platform for instant communication and content sharing that securely connects market participants. It relies on an open source business model that is cost-effective, extensible, and customizable to suit end-user needs.

Entertainment

An entertainment company with an industry-leading gaming platform must process millions of transactions a day in real time and ensure it has a low drop rate for messages. Previously, it used Apache Spark, a powerful open source processing engine, and Hadoop, but it recently switched to Kafka.
Notably, the company is using Kafka as an insurance policy for this data, since Kafka will safely store data in a readable format for as long as the company needs it to. This enables the company to both route messages via a streamlined architecture and store data through Kafka; in the event of a disaster or critical error, it can recover the data and investigate.
Netflix uses Kafka as the gateway for data collection for all applications, requiring many billions of messages to be processed daily. For instance, it is using Kafka for unified event publishing, collection, routing for batch and stream processing, and ad hoc messaging.
In the cloud and on premises, Kafka has moved beyond its initial function of website activity tracking to become an industry standard, providing a reliable, streamlined messaging solution for organizations in a wide variety of industries.
Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Tuesday 23 January 2018

Salesforce interview questions

What Is Salesforce?

What is Salesforce? Salesforce has a strong presence in today's technically savvy world. From tech giants like Google and Facebook to your neighborhood call center, all of them use Salesforce services and products to solve their problems.
Salesforce began as a Software as a Service (SaaS) CRM company. Salesforce now provides various software solutions and a platform for users and developers to develop and deploy custom software. Salesforce.com is based on a multi-tenant architecture. This means that multiple customers share common technology and all run on the latest release. You don't have to worry about application or infrastructure upgrades; they happen automatically. This lets your organization focus on innovation rather than managing technology.

What is a profile in Salesforce? A profile is a collection of settings and permissions that define what a user can do in Salesforce. A profile controls object permissions, field permissions, user permissions, tab settings, app settings, Apex class access, Visualforce page access, page layouts, record types, login hours, and login IP ranges.

What is the difference between a role and a profile in Salesforce? A profile contains user permissions and access settings that control what users can do within their organization. Profiles control which standard and custom applications, tabs, permissions, Apex classes, Visualforce pages, and so on users can see. A role controls the level of visibility that users have into your organization's data.

What is a workflow in Salesforce? Workflow rules can help automate the following types of actions based on your organization's processes: Tasks: assign a new task to a user, role, or record owner. Email Alerts: send an email to one or more recipients you specify. Field Updates: update the value of a field on a record.

If you want more visit Mindmajix

Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Monday 22 January 2018

Image Cryptography

In general, we can say that images are snapshots retrieved by camera-enabled sensors, according to the field of view (FoV) of the camera and the coding characteristics of the hardware and software used. The resolution, color pattern, and compression quality depend on those characteristics and also on the application requirements: grayscale low-resolution images may be appropriate for some kinds of monitoring, while colored high-resolution images are typical for high-quality applications.
The encryption of all retrieved images at source nodes may be expensive in time and processing power, which may render its adoption unfeasible for some sensor networks. Therefore, optimization approaches may be adopted to lower the burden of image cryptography in wireless multimedia sensor networks. Among those approaches, many works have exploited the principle of selective encryption to achieve that goal. Partial or selective encryption is an optimized strategy that exploits the characteristics of media coding algorithms to provide secrecy while reducing computational complexity. In practical terms, only part of the original data is protected, but authenticity, confidentiality, and integrity are still assured.
Some coding algorithms are naturally suitable for selective encryption. Among them, most works have concentrated efforts on quadtree and wavelet-based coding algorithms. Quadtree coding relies on a computational rooted tree, which decomposes the original image into several sub-quadrants, and the importance of the quadrants for the reconstruction process is related to their position in the tree. On the other hand, DWT-based (Discrete Wavelet Transform) algorithms apply a wavelet transform to the original data, producing a hierarchy of frequency bands. Thus, in wavelet-based coding, the band at the highest compression level contains the most essential visual information for the reconstruction process. In both cases, since some parts end up with higher significance for the reconstruction of the original image (at the destination), cryptography may be applied only over those parts. Therefore, the overall cryptography burden can be reduced, since image coding is already required in many cases for compression purposes.
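
To illustrate the selective-encryption idea, here is a rough TypeScript sketch that encrypts only the leading portion of an already-coded image buffer, assuming the coder placed the data most important for reconstruction first. The cipher choice, key handling, and the 10% threshold are illustrative assumptions, not recommendations.

import { createCipheriv, randomBytes } from "node:crypto";

// Encrypt only the first `protectedBytes` of an already-coded image buffer.
// Assumption for illustration: the coding algorithm (e.g. wavelet-based) puts the
// data most important for reconstruction at the start of the buffer.
function selectivelyEncrypt(coded: Buffer, key: Buffer, protectedBytes: number): Buffer {
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-256-ctr", key, iv);
  const head = coded.subarray(0, protectedBytes);   // critical part: encrypted
  const tail = coded.subarray(protectedBytes);      // remaining part: left in the clear
  const encryptedHead = Buffer.concat([cipher.update(head), cipher.final()]);
  // Prepend the IV so the destination node can decrypt the protected part.
  return Buffer.concat([iv, encryptedHead, tail]);
}

// Example: protect roughly the first 10% of a (dummy) coded image.
const key = randomBytes(32);
const codedImage = randomBytes(4096);
const output = selectivelyEncrypt(codedImage, key, Math.floor(codedImage.length * 0.1));
console.log(`output is ${output.length} bytes (16-byte IV + coded data)`);

Because only a fraction of the buffer passes through the cipher, the per-image cost at the source node drops roughly in proportion to the protected share.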
Cryptography may also be applied for watermarking of sensed images, which is focused on authentication. A digital watermark is a special marker that is embedded into scalar, audio, image, or video data, aiming at providing a mechanism to identify ownership and copyright. The watermarking process hides authentication information inside the original data, and the watermark may or may not be visible. In fact, as it is relatively easy to implement, watermarking is a technique that has been used for a long time and is popular in Internet-based systems. In wireless sensor networks, this lightweight technique can be valuable for providing authentication. In general, any image transmission over wireless sensor networks may be protected using watermarks.

Source Daniel G




Explore more about cryptography: visit Mindmajix

Wednesday 17 January 2018

SAP MDM Interview Questions

If you're looking for SAP MDM interview questions for experienced professionals and freshers, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, SAP MDM has a market share of about 17.8%. So, you still have an opportunity to move ahead in your career in SAP MDM. Mindmajix offers advanced SAP MDM Interview Questions 2018 that help you crack your interview and acquire your dream career as an SAP MDM developer.
Q. What is SAP Master Data Management?
SAP Master Data Management (SAP MDM) enables information integrity across the business network, in a heterogeneous IT landscape. SAP MDM helps to define the business network environment based on generic and industry-specific business elements and related attributes, called master data. Master data, for example, covers business partner information, product masters, product structures, or technical asset information. SAP MDM enables the sharing of harmonized master data, formerly trapped in multiple systems, and ensures cross-system data consistency regardless of physical system location and vendor. It helps to align master data by providing services that recognize identical master data objects and keep them consistent. In addition, it enables the federation of business processes by providing consistent distribution mechanisms of master data objects into other systems, within the company and across company boundaries.
Q. What are SAP MDM’s major benefits?
SAP MDM: Helps companies leverage already committed IT investments since it complements and integrates into their existing IT landscape. Reduces overall data maintenance costs by preventing multiple processing in different systems. Accelerates process execution by providing sophisticated data distribution mechanisms to connected applications. Ensures information consistency and accuracy, and therefore reduces error-processing costs that arise from inconsistent master data. Improves corporate decision-making processes in strategic sales and purchasing by providing up-to-date information to all people.
Q. What are the MDM Business Scenarios?
1. Master Data Consolidation
2. Master Data Harmonization
3. Central Master Data Management
4. Rich Product Content Management
5. Customer Data Integration
6. Global Data Synchronization
Q. What is Master Data Consolidation?
In the Master Data Consolidation scenario, users use SAP NetWeaver MDM to collect master data from several systems at a central location, detect and clean up duplicate and identical objects, and manage the local object keys for cross-system communication.
Q. What is Master Data Harmonization?
The Master Data Harmonization scenario enhances the Master Data Consolidation scenario by forwarding the consolidated master data information to all connected, remote systems, thus depositing unified, high-quality data in heterogeneous system landscapes. With this scenario, you can synchronize globally relevant data across your system landscape.
Q. What are all the capabilities and functions of SAP NetWeaver MDM?
SAP NetWeaver MDM is used to aggregate master data from across the entire system landscape (including SAP and non-SAP systems) into a centralized repository of consolidated information. High information quality is ensured by syndicating harmonized master data that is globally relevant to the subscribed applications. A company’s quality standards are supported by ensuring the central control of master data, including maintenance and storage.
If you want more visit  Mindmajix

Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.


Thursday 11 January 2018

Confluence Interview Questions

If you're looking for Confluence interview questions for experienced professionals or freshers, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, Confluence has a market share of about 0.9%. So, you still have an opportunity to move ahead in your career in Confluence development. Mindmajix offers advanced Confluence Interview Questions 2018 that help you crack your interview and acquire your dream career as a Confluence developer.
Q: What are the benefits of a team-work than working individually on a project?
Well, teamwork is always a better approach, and it is fair to say that when multiple minds work on the same project, the outcomes are always superior. The prime factor is that one can notice the mistakes of others and can always give better suggestions for improving the final outcome. Also, joint efforts always ensure fewer errors and quicker results, irrespective of the project. A good team can always ensure good time management and delivery of the project without missing deadlines.
Q: What should be the features of a good collaboration software according to you?
Collaboration software is becoming extremely popular in the present scenario. It comes with a lot of features. Good collaboration software should be reliable, user-friendly, and easy to use. In addition to this, it must be secure enough to be trusted by the organization. Moreover, it must be compatible with the currently available technology and applications that are a must in any form of business.
Q: What sort of conflicts can be avoided by using Confluence?
Confluence is a popular application for teamwork. Although teamwork assures excellent outcomes in every aspect, it is also true that a lot of conflicts and issues can arise. Confluence is capable of simply eliminating all such issues, irrespective of their nature and source. Moreover, there are problems such as human errors, glitches related to applications, and so on that can also be eliminated with this tool.
Q: What are the tasks that a collaboration software can perform easily and how they are beneficial?
Team collaboration software can perform a diverse array of tasks that are required to maintain business processes reliably. In addition to this, it can always ensure productivity without compromising on anything. It can simply connect two different users, irrespective of their location, for sharing ideas and information, managing business processes, and so on. Many times a project has different modules which are developed at different locations. Collaboration software is extremely helpful at such a stage, as it can easily handle many of the tasks required for this purpose.
Q: In what way Confluence is time, as well as cost saving approach according to you?
Confluence simply eliminates the need for users to visit or meet each other frequently when they are working on the same project but are engaged in different departments or are geographically dispersed. As ideas, discussions, and other tasks can be managed simply through Confluence, it also saves a lot of time and cost, to a good extent. There are certain features that are regarded as best in every aspect.
Q: Tell us some benefits of the Confluence tool.
1. It saves time and efforts
2. Cut down the chances of all major errors
3. Avoids conflicts among the resources
4. Powerful enough to be trusted
5. Assures timely delivery of project 
If you want more visit   Mindmajix 

Explore more courses visit  Mindmajix   
 
Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Wednesday 10 January 2018

SSAS Interview Questions

If you're looking for SSAS interview questions for experienced professionals or freshers, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, SSAS has a market share of about 26.35%. So, you still have an opportunity to move ahead in your career in SSAS. Mindmajix offers advanced SSAS Interview Questions 2018 that help you crack your interview and acquire your dream career as an SSAS developer.
Q. What is SQL Server Analysis Services (SSAS)? List out the features?
Microsoft SQL Server 2014 Analysis Services (SSAS) delivers online analytical processing (OLAP) and data mining functionality for business intelligence applications. Analysis Services supports OLAP by letting us design, create, and manage multidimensional structures that contain data aggregated from other data sources, such as relational databases. For data mining applications, Analysis Services lets us design, create, and visualize data mining models that are constructed from other data sources by using a wide variety of industry-standard data mining algorithms.
Analysis Services is a middle-tier server for analytical processing, OLAP, and data mining. It manages multidimensional cubes of data and provides access to heaps of information, including aggregations of data. One can create data mining models from data sources and use them for business intelligence, including reporting features.
Analysis Services provides a combined view of the data used in OLAP or data mining. Services here refer to OLAP and data mining. Analysis Services assists in creating, designing, and managing multidimensional structures containing data from varied sources. It provides a wide array of data mining algorithms for specific trends and needs.
Some of the key features are:
1. Ease of use with a lot of wizards and designers.
2. Flexible data model creation and management
3. Scalable architecture to handle OLAP
4. Provides integration of administration tools, data sources, security, caching, reporting, etc.
5. Provides extensive support for custom applications
Q. What is the difference between SSAS 2005 and SSAS 2008?
1. In 2005 it is not possible to create an empty cube, but in 2008 we can create an empty cube.
2. A new feature in Analysis Services 2008 is the Attribute Relationships tab in the Dimension Designer; implementing attribute relationships was complex in SSAS 2005.
3. We can create only 2,000 partitions per measure group in SSAS 2005; this limit on partitions is removed in SSAS 2008.
Q. What is OLAP? How is it different from OLTP?
1. OLAP stands for On-Line Analytical Processing. It is a capability, or a set of tools, which enables end users to easily and effectively access the data warehouse data using a wide range of tools like Microsoft Excel, Reporting Services, and many other 3rd party business intelligence tools.
2. OLAP is used for analysis purposes to support day-to-day business decisions and is characterized by less frequent data updates and historical data. Whereas OLTP (On-Line Transactional Processing) is used to support day-to-day business operations and is characterized by frequent data updates; it contains the most recent data along with limited historical data, based on the retention policy driven by business needs.
Q. What is a Data Source? What are the different data sources supported by SSAS?
A DATA SOURCE contains the connection information used by SSAS to connect to the underlying database to load the data into SSAS during processing. A Data Source primarily contains the following information (apart from various other properties like Query timeout, Isolation etc.):
1. Provider
2. Server Name
3. Database Name
4. Impersonation Information
SSAS Supports both .Net and OLE DB Providers. Following are some of the major sources supported by SSAS: SQL Server, MS Access, Oracle, Teradata, IBM DB2, and other relational databases with the appropriate OLE DB provider.

If you want more visit  Mindmajix 
 
Author

Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Tuesday 9 January 2018

Big Data solutions for SQL Server

Big data is a large amount of data that is difficult or impossible to handle with a traditional relational database. The term "big data" has seen increasing use over the past few years. Here, we review the various ways that big data is described and how Hadoop developed as a technology that is commonly used to process big data. In addition, we introduce Microsoft HDInsight, an implementation of Hadoop available as a Windows Azure service. Then we explore Microsoft PolyBase, an on-premises solution that integrates relational data stored in Microsoft SQL Server Parallel Data Warehouse (PDW) with non-relational data stored in a Hadoop Distributed File System (HDFS).

Big data

For several decades, many organizations have been analyzing data generated by transactional systems. This data has usually been stored in relational database management systems. A common step in the development of a business-intelligence solution is weighing the cost of transforming, cleansing, and storing this data in preparation for analysis against the perceived value that insights derived from the analysis of the data could deliver. As a consequence, decisions are made about what data to keep and what data to ignore. Meanwhile, the data available for analysis continues to proliferate from a broad assortment of sources, such as server log files, social media, or instrument data from scientific research. At the same time, the cost to store high volumes of data on commodity hardware has been decreasing, and the processing power necessary for complex analysis of all this data has been increasing. This confluence of events has given rise to new technologies that support the management and analysis of big data.

Describing Big Data

The point at which data becomes big data is still the subject of much debate among data-management professionals. One approach to describing big data is known as the 3Vs: volume, velocity, and variety. This model, introduced by Gartner analyst Doug Laney in 2001, has been extended with a fourth V, variability. However, disagreement continues, with some people considering the fourth V to be veracity.
Although it seems reasonable to associate volume with big data, how is a large volume different from the very large databases (VLDBs) and extreme workloads that some industries routinely manage? Examples of data sources that fall into this category include airline reservation systems, point of sale terminals, financial trading, and cellular-phone networks. As machine-generated data outpaces human-generated data, the volume of data available for analysis is proliferating rapidly. Many techniques, as well as software and hardware solutions such as PDW, exist to address high volumes of data. Therefore, many people argue that some other characteristic must distinguish big data from other classes of data that are routinely managed.
Some people suggest that this additional characteristic is velocity or the speed at which the data is generated. As an example, consider the data generated by the Large Hadron Collider experiments, which is produced at a rate of 1 gigabyte (GB) per second. This data must be subsequently processed and filtered to provide 30 petabytes (PB) of data to physicists around the world. Most organizations are not generating data at this volume or pace, but data sources such as manufacturing sensors, scientific instruments, and web-application servers are nonetheless generating data so fast that complex event-processing applications are required to handle high-volume and high-speed throughputs. Microsoft StreamInsight is a platform that supports this type of data management and analysis.
Data does not necessarily require volume and velocity to be categorized as big. Instead, a high volume of data with a lot of variety can constitute big data. Variety refers to the different ways that data might be stored: structured, semistructured, or unstructured. On the one hand, data-warehousing techniques exist to integrate structured data (often in relational form) with semistructured data (such as XML documents). On the other hand, unstructured data is more challenging, if not impossible, to analyze by using traditional methods. This type of data includes documents in PDF or Word format, images, and audio or video files, to name a few examples. Not only is unstructured data problematic for analytical solutions, but it is also growing more quickly than the file systems on a single server can usually accommodate.
Big data as a branch of data management is still difficult to define with precision, given that many competing views exist and that no clear standards or methodologies have been established. Data that looks big to one organization by any of the definitions we've described might look small to another organization that has evolved solutions for managing specific types of data. Perhaps the best definition of big data at present is also the most general. For our purposes, we take the position that big data describes a class of data that requires a different architectural approach than currently available relational database systems can effectively support, such as append-only workloads instead of updates.

If you want more visit   Mindmajix    

Explore More Courses  Visit   Mindmajix
 
Author

Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Monday 8 January 2018

WHAT IS QLIKVIEW AND ITS COMPONENTS

QlikView does in-memory data processing and data integration, and stores the data in memory. It can read data from files and relational databases. It supports the creation and consumption of dynamic apps. It makes the presentation of data interactive.
QlikView has 3 major components:
1. QlikView Desktop: It is a development tool
2. QlikView Server: Stores QlikView applications
3. QlikView Publisher: Loads data from sources and publishes it to the clients.
  • QlikView – the data access solution that enables you to analyze and use information from different data sources.
  • A number of features have been added to QlikView 10 with the purpose of providing possibilities to add metadata to the QlikView document. Adding metadata remains entirely optional for the developer.
  • Fields can now be tagged with system defined and custom meta-tags. A number of system tags are automatically generated for the fields of a document when the script is executed.
  • In analogy to field comments it is also possible to read or set comments in source tables. The comments are shown in the Tables page of the Document Properties dialog and as hover tooltips in the Table Viewer.
  • Chart expressions can now be given an explanatory text comment. These are visible and editable in the Expressions page of the Chart Properties dialog.
  • The script editor has been redesigned. A number of new commands can be found in the menus, e.g. the ODBC administrator can now be opened from inside the script editor; also the 32 bit ODBC administrator can be opened from a 64 bit QlikView
  • The basic idea is that QlikView at script run spawns a second process – QVConnect – that in turn connects to the data source.  Two different QVConnect files are installed in the QlikView folder: QVConnect32.exe and QVConnect64.exe. It is also possible to develop custom connect programs.
  • The interpretation and transformation of data are now done in multiple threads, which speeds up the load process tremendously. This does not imply any changes to the load script, i.e. the load script is still sequential: no parallel branches can be defined.
  • In previous versions, Input Fields needed to be loaded in a well-defined order for their values to be correctly associated after a reload. The Input Field values were always associated with the same record number, which caused problems if the load order changed, for example by inserting new values.
  • This is a new file/stream format for high performance input to QlikView. A QVX formatted file contains metadata describing a table of data and the actual data. In contrast to the QVD format, which is proprietary and optimized for minimum transformations inside QlikView, the QVX format is public and requires a few transformations when exporting data from traditional database formats



If you want more visit   Mindmajix  
Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Sunday 7 January 2018

Microsoft Dynamics AX Interview Questions

If you're looking for Microsoft Dynamics AX interview questions for experienced professionals and freshers, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, Microsoft Dynamics AX has a market share of about 6.6%. So, you still have an opportunity to move ahead in your career in Microsoft Dynamics AX. Mindmajix offers advanced Microsoft Dynamics AX Interview Questions that help you crack your interview and acquire your dream career.
Q. What is Microsoft Dynamics AX?
Microsoft Dynamics AX is multi-language, multi-currency, industry-specific, global ERP Product and one of the Microsoft’s Dynamics ERP Family.
Q. Difference between edit and display method
Display indicates that the method's return value is to be displayed on a form or a report. The value cannot be altered in the form or report.
Edit indicates that the method's return type is to be used to provide information for a field that is used in a form. The value in the field can be edited.
Q. Difference between perspectives and table collection
Perspectives can organize information for a report model in the Application Object Tree (AOT). A perspective is a collection of tables. You use a report model to create reports.
A table collection is a collection of tables that is shared across all the virtual companies.
Q. Define IntelliMorph
IntelliMorph is the technology that controls the user interface in Microsoft Dynamics AX. The user interface is how the functionality of the application is presented or displayed to the user.
IntelliMorph controls the layout of the user interface and makes it easier to modify forms, reports, and menus.
Q. Define MorphX  
The MorphX Development Suite is the integrated development environment (IDE) in Microsoft Dynamics AX used to develop and customize both the Windows interface and the Web interface.
Q. Define X++  
X++ is the object-oriented programming language that is used in the MorphX environment.
Q. Differentiate refresh(), reread(), research(), executequery()
refresh() will not reread the record from the database. It basically just refreshes the screen with whatever is stored in the form cache.
reread() will only re-read the CURRENT record from the database, so you should not use it to refresh the form data if you have added or removed records. It is often used if you change some values in the current record in code and commit them to the database using .update() on the table, instead of through the form data source. In this case .reread() will make those changes appear on the form.
research() will rerun the existing form query against the data source, therefore updating the list with new/removed records as well as updating existing ones. This will honour any existing filters and sorting on the form.
executeQuery() is another useful one. It should be used if you have modified the query in your code and need to refresh the form. It is like research() except that it takes query changes into account.
Q. Define AOT
The Application Object Tree (AOT) is a tree view of all the application objects within Microsoft Dynamics AX. The AOT contains everything you need to customize the look and functionality of a Microsoft Dynamics AX application.
Q. Define AOS  
The Microsoft Dynamics AX Object Server (AOS) is the second-tier application server in the Microsoft Dynamics AX three-tier architecture.
The 3-tier environment is divided as follows:
1. First Tier – Intelligent Client
2. Second Tier – AOS
3. Third Tier – Database Server
In a 3-tier solution the database runs on a server as the third tier; the AOS handles the business logic in the second tier. The thin client is the first tier and handles the user interface and necessary program logic.
Q. Difference between temp table and container.
1. Data in containers are stored and retrieved sequentially, but a temporary table enables you to define indexes to speed up data retrieval.
2. Containers provide slower data access if you are working with many records. However, if you are working with only a few records, use a container.
3. Another important difference between temporary tables and containers is how they are used in method calls. When you pass a temporary table into a method call, it is passed by reference. Containers are passed by value. When a variable is passed by reference, only a pointer to the object is passed into the method. When a variable is passed by value, a new copy of the variable is passed into the method. If the computer has a limited amount of memory, it might start swapping memory to disk, slowing down application execution. When you pass a variable into a method, a temporary table may provide better performance than a container
Q. What is an EDT, Base Enum, how can we use array elements of an EDT?
EDT – To reuse its properties. The properties of many fields can be changed at one time by changing the properties on the EDT. Relations assigned to an EDT are known as dynamic relations.
EDT relations are Normal and Related field fixed.
Why not Field fixed? Field fixed works only between two tables with a 1:1 relation, while Related field fixed works with 1:many relations, so an EDT uses Related field fixed.
BaseEnum – a list of literals. Enum values are represented internally as integers. You can declare up to 251 (0 to 250) literals in a single enum type. To reference an enum in X++, use the name of the enum, followed by the name of the literal, separated by two colons, e.g. NoYes::No.
Q. Definition and use of Maps, how AddressMap (with methods) is used in standard AX?
Maps define X++ elements that wrap table objects at run time. With a map, you associate a map field with a field in one or more tables. This enables you to use the same field name to access fields with different names in different tables. Map methods enable to you to create or modify methods that act on the map fields.
Address map that contains an Address field. The Address map field is used to access both the Address field in the CustTable table and the ToAddress field in the CustVendTransportPointLine table
Q. What is the difference between Index and Index hint?
Adding the “index” statement to an Axapta select does NOT mean that this index will be used by the database. What it DOES mean is that Axapta will send an “order by” to the database. Adding the “index hint” statement to an Axapta select DOES mean that this index will be used by the database (and no other one).
Q. How many types of data validation methods are written on table level?
validateField(),validateWrite(),validateDelete(),aosvalidateDelete(),
aosvalidateInsert(), aosvalidateRead(),aosvalidateUpdate().
Q. How many types of relations are available in Axapta, Explain each of them.
Normal relation: enforces referential integrity, such as foreign keys. Used for displaying a lookup on the child table.
Field fixed: works as a trigger to verify that a relation is active; if an enum field in the table has a specific value, then the relation is active. It works on conditional relations and on enum types of data.
Ex: Dimension table
Related field fixed: works as a filter on the related table. It only shows records that match the specified value for an enum field on the related table.
Q. Difference between Primary & Cluster index.
Primary index: It works on unique indexes. The data should be unique and not null. Retrieves data from the database.
Clustered index: It works on unique and non-unique indexes. Retrieves data from the AOS.
The advantages of having a cluster index are as follows:
1. Search results are quicker when records are retrieved by the cluster index, especially if records are retrieved sequentially along the index.
2. Other indexes that use fields that are a part of the cluster index might use less data space.
3. Fewer files in the database; data is clustered in the same file as the clustering index. This reduces the space used on the disk and in the cache.
The disadvantages of having a cluster index are as follows:
1. It takes longer to update records (but only when the fields in the clustering index are changed).
2. More data space might be used for other indexes that use fields that are not part of the cluster index (if the clustering index is wider than approximately 20 characters).

If you want more Visit   Mindmajix
Author
Lianamelissa is Research Analyst at Mindmajix. A techno freak who likes to explore different technologies. Likes to follow the technology trends in market and write about them.

Friday 5 January 2018

Apache Solr Interview Questions

If you're looking for Apache Solr interview questions, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, Apache Solr has a market share of about 15.89%. So, you still have an opportunity to move ahead in your career in Apache Solr. Mindmajix offers advanced Apache Solr Interview Questions that help you crack your interview and acquire your dream career.
Q. WHAT IS APACHE SOLR?
Apache Solr is a standalone full-text search platform that performs searches on multiple websites and indexes documents using XML and HTTP. Built on a Java library called Lucene, Solr supports a rich schema specification for a wide range of documents and offers flexibility in dealing with different document fields. It also includes an extensive search plugin API for developing custom search behavior.
Q. WHAT FILE CONTAINS CONFIGURATION FOR DATA DIRECTORY?
The solrconfig.xml file contains the configuration for the data directory.
Q. WHAT FILE CONTAINS DEFINITION OF THE FIELD TYPES AND FIELDS OF DOCUMENTS?
The schema.xml file contains the definition of the field types and fields of documents.
Q. WHAT ARE THE FEATURES OF APACHE SOLR?
  • Allows scalable, high-performance indexing
  • Near real-time indexing
  • Standards-based open interfaces like XML, JSON and HTTP
  • Flexible and adaptable faceting
  • Advanced and Accurate full-text search
  • Linearly scalable, auto index replication, auto failover and recovery
  • Allows concurrent searching and updating
  • Comprehensive HTML administration interfaces
  • Provides cross-platform solutions that are index-compatible
Q. WHAT IS APACHE LUCENE?
Supported by the Apache Software Foundation, Apache Lucene is a free, open-source, high-performance text search engine library written in Java by Doug Cutting. Lucene facilitates full-featured searching, highlighting, indexing, and spellchecking of documents in various formats like MS Office docs, HTML, PDF, text docs, and others.
Q. WHAT IS REQUEST HANDLER?
When a user runs a search in Solr, the search query is processed by a request handler. SolrRequestHandler is a Solr plugin, which defines the logic to be executed for any request. The solrconfig.xml file comprises several handlers (containing a number of instances of the same SolrRequestHandler class with different configurations).
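For illustration, here is a minimal sketch of sending a query to the standard /select request handler of a Solr core over HTTP. The host, the core name ("articles"), and the field names in the query are assumptions made for the example.

// Query a (hypothetical) "articles" core through the standard /select request handler.
const params = new URLSearchParams({
  q: "title:kafka", // query string, interpreted by the configured query parser
  rows: "10",       // number of documents to return
  wt: "json",       // response format
});

async function searchSolr(): Promise<void> {
  const response = await fetch(`http://localhost:8983/solr/articles/select?${params}`);
  const result = await response.json();
  console.log(`found ${result.response.numFound} documents`);
  for (const doc of result.response.docs) {
    console.log(doc.id, doc.title);
  }
}

searchSolr().catch(console.error);

Which query parser interprets q, and which response writer produces the output, is decided by how that request handler is configured in solrconfig.xml.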
Q. WHAT ARE THE ADVANTAGES AND DISADVANTAGES OF STANDARD QUERY PARSER?
Also known as the Lucene parser, the Solr standard query parser enables users to specify precise queries through a robust syntax. However, the parser's syntax is vulnerable to many syntax errors, unlike more error-tolerant query parsers such as the DisMax parser.
Q. WHAT ALL INFORMATION IS SPECIFIED IN FIELD TYPE?
A field type includes four types of information:
  • Name of field type
  • Field attributes
  • An implementation class name
  • If the field type is TextField, a description of the field analysis for the field type.
If you want more visit   Mindmajix



Explore more courses visit   mindmajix