24 April 2018
20 April 2018
Consumer Protection
A few areas of consumer protection that serve as measurable indicators of consumer rights, fair trading practices, competition, and accurate information in the marketplace:
- Access
- Complaints Handling
- Dispute Resolution and Redress
- Economic Interests
- Education and Awareness
- Empowerment Index
- Protection Index
- Fraud Detection
- Governance and Participation
- Information and Transparency
- Verifiable Practices and Standards
- Privacy and Data Security
- Safety and Reliability
- Product and Service Reviews
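The areas above read as indicators of measure; one hedged way to operationalize them is as a weighted composite index. A minimal sketch, in which all area weights and scores are entirely hypothetical:

```python
# Hypothetical composite consumer-protection index: each indicator area
# gets a score in [0, 1] and a weight; the index is the weighted mean.
def protection_index(scores, weights):
    """scores, weights: dicts keyed by indicator area name."""
    total_weight = sum(weights[a] for a in scores)
    return sum(scores[a] * weights[a] for a in scores) / total_weight

# Illustrative, made-up numbers only.
scores = {"Access": 0.8, "Complaints Handling": 0.6, "Privacy and Data Security": 0.9}
weights = {"Access": 1.0, "Complaints Handling": 1.0, "Privacy and Data Security": 2.0}
index = protection_index(scores, weights)
```

In practice the weights would come from policy priorities and the scores from survey or audit data; the sketch only shows the aggregation step.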
Labels: big data, data science, ecommerce, economics, fraud, legal, predictive analytics, society
Identity and Access Management
Tools:
- OpenAM
- OpenSSO
- Shibboleth
- OpenDJ
- OpenIDM
A Few Machine Learning Use Cases in IAM:
- Provisioning accounts and permissions management
- Dynamic risk scoring
- Identification of Friend or Foe
- Fraud and Threat patterns via detection of anomalies
- Feature Engineering (attributes, subjects, resources, environments, roles, entitlements)
- Rule profiling using decision functions
- Clustering to identify threshold patterns, excess, shared identity attributes, overlaps
- Potential for use with blockchain for digital identity and trust
- Deep identification with biometrics and fingerprints
- Mining for visibility of IAM and Security Information and Event Management
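Several of the use cases above (dynamic risk scoring, fraud and threat patterns via anomaly detection) reduce to outlier detection over login or entitlement features. A minimal sketch using scikit-learn's IsolationForest on synthetic login events; the feature choices and data are hypothetical:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical features per login event: [hour_of_day, failed_attempts].
normal = np.column_stack([rng.normal(12, 2, 200), rng.poisson(0.2, 200)])
# An obviously anomalous event: 3 a.m. login with many failed attempts.
suspicious = np.array([[3.0, 15.0]])

model = IsolationForest(contamination=0.05, random_state=0).fit(normal)
# predict returns 1 for inliers, -1 for outliers; decision_function
# gives a continuous score that can feed a dynamic risk score.
label = model.predict(suspicious)[0]
risk_score = -model.decision_function(suspicious)[0]  # higher = riskier
```

A real IAM deployment would engineer far richer features (attributes, roles, entitlements, environments, as listed above) and calibrate the score against known incidents.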
Labels: big data, Cloud, data science, deep learning, machine learning, predictive analytics, security
18 April 2018
Consumer Behavior
Consumer spending behavior is closely correlated with household income, which determines disposable income. One can build a user profile for a consumer from a set of attributes and contextualize it against specific market trends. Different regions globally have their own taxation, but to map a user's behavior fully one has to look at an entire calendar period (day, week, month, year); in the UK this would be the April-to-April tax year. Doing so yields clearer patterns across bank holidays, weekends, weekdays, seasons, social events, and other periods, from which specific contextual behaviors can be gleaned. Once an anonymized user is mapped for year Y1, the following years Y2, Y3, ..., Yn can be mapped to discover historical trends. Machine learning approaches such as clustering provide a means of visualizing complex networks to identify churn, segmentation, and intent to convert. Additionally, semantic enrichment can provide further context for answering specific data science questions and for end-to-end predictive storytelling. From a big data standpoint it would certainly help to process in both batch and stream mode; however, one has to account for the difference between the processing time and the event time of recorded behavior, and to maximize in-memory computation. The following are key indicators that could be analyzed for consumer behavior:
- Economic conditions
- Group/Social Influence
- Historical Trends
- Location-Awareness
- Marketing Campaigns
- Personal Preferences
- Purchase Power
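As noted above, clustering can surface segmentation from indicators like these. A minimal sketch with scikit-learn's KMeans over synthetic income/spend features; the feature names and numbers are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical consumers: [annual_income_k, monthly_spend_k].
low = rng.normal([25, 1], [3, 0.2], size=(50, 2))
high = rng.normal([90, 4], [5, 0.5], size=(50, 2))
X = np.vstack([low, high])

# Two segments by purchase power; in practice the number of clusters
# would be tuned (e.g. elbow method or silhouette score).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

On real data the features would come from the profile attributes and periods described above, and the resulting segments could be inspected against the calendar contexts (holidays, seasons, events).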
Additionally, the following could further add value in a cyclical process to identify, discover, and understand:
- Consumer Habits
- Conversion Targets
- Product Choices
- Consumer Reviews and Ratings
- Consumer Sentiments
- Identifying and Predicting Churn, Segmentation, Price Optimization
- Profiling for insights, forecasting, personalized promotions/offers/discounts
- Consumer Experience
- Consumer Price Index
- Consumer Satisfaction Index
- Consumer Protection
- Market Trends
- Consumer Interests - unconscious consumption
- Consumer Intents - conscious search
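For the churn item above, a hedged sketch of supervised churn prediction with logistic regression; the features (recency, frequency) and the labeling rule are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# Hypothetical features: [days_since_last_purchase, purchases_per_month].
recency = rng.uniform(0, 90, n)
frequency = rng.uniform(0, 10, n)
X = np.column_stack([recency, frequency])
# Synthetic rule: long recency and low frequency -> churned (label 1).
y = ((recency > 45) & (frequency < 5)).astype(int)

clf = LogisticRegression().fit(X, y)
# Probability of churn for a lapsed, infrequent buyer.
p_churn = clf.predict_proba([[80.0, 1.0]])[0, 1]
```

In a real pipeline the labels would come from observed attrition over the mapped years Y1..Yn rather than a hand-written rule.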
Labels: big data, data science, deep learning, ecommerce, linked data, predictive analytics, semantic web, text analytics
9 April 2018
Deep Learning Pipelines with Spark
- BigDL - CPU Optimized
- DeepLearning4J - JVM
- DeepLearning Pipelines - Integration
- MLLIB Perceptron - Integration
- TensorflowOnSpark - Integration
- TensorFrames - Integration
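Whatever the integration, the MLLIB perceptron ultimately computes layered affine transforms with nonlinearities; Spark's MultilayerPerceptronClassifier uses sigmoid hidden units and a softmax output. A tiny NumPy sketch of one forward pass, with random weights, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 4-5-3 network, analogous to layers=[4, 5, 3] in Spark's
# MultilayerPerceptronClassifier (sigmoid hidden, softmax output).
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

def forward(x):
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))  # sigmoid hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()

probs = forward(rng.normal(size=4))            # class probabilities
```

The listed frameworks differ mainly in where this computation runs (CPU-optimized, JVM, or delegated to TensorFlow) and how it is distributed over Spark executors.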
Labels: artificial intelligence, big data, data science, deep learning, distributed systems, machine learning, spark
4 April 2018
Feature Structure Goals in Spark
Classification & Regression
End Goal:
- Column of type Double to represent Label
- Column of type Vector (Sparse or Dense)
Recommendation
End Goal:
- Column of Users
- Column of Items
- Column of Ratings
Unsupervised Learning
End Goal:
- Column of Type Vector (Sparse or Dense)
Graph Analytics
End Goal:
- DataFrame of Vertices
- DataFrame of Edges
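The feature column Spark expects is a Vector, which can be sparse or dense. The same distinction can be sketched outside Spark with NumPy and SciPy, purely for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Dense representation: every entry stored explicitly.
dense = np.array([0.0, 7.0, 0.0, 0.0, 3.0])

# Sparse representation: only nonzero positions and values are stored,
# analogous to Spark's Vectors.sparse(5, [1, 4], [7.0, 3.0]).
sparse = csr_matrix(dense)

# Both represent the same feature vector.
same = np.allclose(sparse.toarray().ravel(), dense)
```

Sparse vectors pay off when features are high-dimensional and mostly zero (e.g. one-hot encodings), which is the common case after feature transformation in Spark pipelines.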